Test Report: QEMU_macOS 19087

Commit 1e692642013946ace2b084076a09075a835a9418 · 2024-06-17 · 34933

Failed tests (156 of 258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.33
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.01
27 TestAddons/Setup 10.14
28 TestCertOptions 10.13
29 TestCertExpiration 195.47
30 TestDockerFlags 10.26
31 TestForceSystemdFlag 10.44
32 TestForceSystemdEnv 10.23
38 TestErrorSpam/setup 9.8
47 TestFunctional/serial/StartWithProxy 10.01
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.63
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.82
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.09
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 112.47
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.42
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 25.78
141 TestMultiControlPlane/serial/StartCluster 10.2
142 TestMultiControlPlane/serial/DeployApp 115.26
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 59.41
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.48
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.52
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.81
165 TestJSONOutput/start/Command 9.72
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.27
197 TestMountStart/serial/StartWithMountFirst 10.13
200 TestMultiNode/serial/FreshStart2Nodes 9.98
201 TestMultiNode/serial/DeployApp2Nodes 91.09
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 40.01
209 TestMultiNode/serial/RestartKeepsNodes 8.72
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.63
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.14
217 TestPreload 10.11
219 TestScheduledStopUnix 9.98
220 TestSkaffold 13.59
223 TestRunningBinaryUpgrade 609.45
225 TestKubernetesUpgrade 18.69
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.12
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.23
241 TestStoppedBinaryUpgrade/Upgrade 578.16
243 TestPause/serial/Start 10.08
253 TestNoKubernetes/serial/StartWithK8s 9.89
254 TestNoKubernetes/serial/StartWithStopK8s 5.44
255 TestNoKubernetes/serial/Start 5.41
259 TestNoKubernetes/serial/StartNoArgs 5.5
261 TestNetworkPlugins/group/auto/Start 9.93
262 TestNetworkPlugins/group/kindnet/Start 9.85
263 TestNetworkPlugins/group/calico/Start 10.01
264 TestNetworkPlugins/group/custom-flannel/Start 9.82
265 TestNetworkPlugins/group/false/Start 9.84
266 TestNetworkPlugins/group/enable-default-cni/Start 9.98
267 TestNetworkPlugins/group/flannel/Start 9.85
268 TestNetworkPlugins/group/bridge/Start 9.83
269 TestNetworkPlugins/group/kubenet/Start 9.9
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.88
285 TestStartStop/group/embed-certs/serial/FirstStart 10.02
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
289 TestStartStop/group/embed-certs/serial/DeployApp 0.09
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
293 TestStartStop/group/no-preload/serial/SecondStart 5.27
295 TestStartStop/group/embed-certs/serial/SecondStart 7.35
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/no-preload/serial/Pause 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.98
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/embed-certs/serial/Pause 0.11
307 TestStartStop/group/newest-cni/serial/FirstStart 9.92
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.17
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (20.33s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-246000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-246000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.32574775s)

-- stdout --
	{"specversion":"1.0","id":"7e0abe67-e0b4-4cb0-a916-2614e95edd16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13bb7778-51f9-4c9e-8ce9-95ba8b645d24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19087"}}
	{"specversion":"1.0","id":"bb495e6f-3a3a-426c-b19d-7e080e77ae25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig"}}
	{"specversion":"1.0","id":"717ef7a1-4b15-47ad-92f3-6383a62b9b71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"970d0043-bdcd-4145-ac89-61645057f084","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d348ed30-1306-4ae2-aaf0-378f5d831a6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube"}}
	{"specversion":"1.0","id":"a65bc538-3b2e-48c8-9fd7-2c804cc2b4f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"90be424a-963e-407b-8a44-3e9eff257fce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e0580f9-0c7d-40e0-9e3c-6eddb59e7b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4e3140b9-4deb-4033-a206-62cb39164aba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18aefed5-e219-4ce7-b1ec-3256a55ff2df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-246000\" primary control-plane node in \"download-only-246000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"420d5ebb-d996-46f3-94cd-cf6789ad8ec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1eef80fe-13fe-43cb-b16c-915e58de89ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900] Decompressors:map[bz2:0x140005d3c50 gz:0x140005d3c58 tar:0x140005d3bf0 tar.bz2:0x140005d3c10 tar.gz:0x140005d3c20 tar.xz:0x140005d3c30 tar.zst:0x140005d3c40 tbz2:0x140005d3c10 tgz:0x14
0005d3c20 txz:0x140005d3c30 tzst:0x140005d3c40 xz:0x140005d3c60 zip:0x140005d3c70 zst:0x140005d3c68] Getters:map[file:0x14000063520 http:0x1400081e190 https:0x1400081e1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"27825420-d88b-48bf-8c88-66bd2039de42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0617 04:26:13.874124    6542 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:26:13.874272    6542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:13.874276    6542 out.go:304] Setting ErrFile to fd 2...
	I0617 04:26:13.874278    6542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:13.874415    6542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	W0617 04:26:13.874514    6542 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19087-6045/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19087-6045/.minikube/config/config.json: no such file or directory
	I0617 04:26:13.875816    6542 out.go:298] Setting JSON to true
	I0617 04:26:13.893610    6542 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3343,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:26:13.893671    6542 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:26:13.897732    6542 out.go:97] [download-only-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:26:13.901729    6542 out.go:169] MINIKUBE_LOCATION=19087
	I0617 04:26:13.897869    6542 notify.go:220] Checking for updates...
	W0617 04:26:13.897906    6542 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball: no such file or directory
	I0617 04:26:13.910647    6542 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:26:13.914821    6542 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:26:13.920698    6542 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:26:13.924713    6542 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	W0617 04:26:13.929699    6542 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 04:26:13.929899    6542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:26:13.932713    6542 out.go:97] Using the qemu2 driver based on user configuration
	I0617 04:26:13.932731    6542 start.go:297] selected driver: qemu2
	I0617 04:26:13.932734    6542 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:26:13.932801    6542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:26:13.935736    6542 out.go:169] Automatically selected the socket_vmnet network
	I0617 04:26:13.941017    6542 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0617 04:26:13.941131    6542 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:26:13.941159    6542 cni.go:84] Creating CNI manager for ""
	I0617 04:26:13.941177    6542 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0617 04:26:13.941229    6542 start.go:340] cluster config:
	{Name:download-only-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:26:13.946136    6542 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:26:13.950780    6542 out.go:97] Downloading VM boot image ...
	I0617 04:26:13.950817    6542 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso
	I0617 04:26:22.289295    6542 out.go:97] Starting "download-only-246000" primary control-plane node in "download-only-246000" cluster
	I0617 04:26:22.289321    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:22.399226    6542 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:26:22.399270    6542 cache.go:56] Caching tarball of preloaded images
	I0617 04:26:22.400216    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:22.404453    6542 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0617 04:26:22.404465    6542 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:22.632817    6542 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:26:32.948289    6542 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:32.948476    6542 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:33.644647    6542 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0617 04:26:33.644844    6542 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-246000/config.json ...
	I0617 04:26:33.644863    6542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-246000/config.json: {Name:mk162b574b25804148683088f31df764079244a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:26:33.645930    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:33.646127    6542 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0617 04:26:34.123083    6542 out.go:169] 
	W0617 04:26:34.128610    6542 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900] Decompressors:map[bz2:0x140005d3c50 gz:0x140005d3c58 tar:0x140005d3bf0 tar.bz2:0x140005d3c10 tar.gz:0x140005d3c20 tar.xz:0x140005d3c30 tar.zst:0x140005d3c40 tbz2:0x140005d3c10 tgz:0x140005d3c20 txz:0x140005d3c30 tzst:0x140005d3c40 xz:0x140005d3c60 zip:0x140005d3c70 zst:0x140005d3c68] Getters:map[file:0x14000063520 http:0x1400081e190 https:0x1400081e1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0617 04:26:34.128636    6542 out_reason.go:110] 
	W0617 04:26:34.135084    6542 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:26:34.139077    6542 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-246000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (20.33s)
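
The failure is in the checksum fetch, not the binary download itself: minikube caches kubectl through hashicorp/go-getter with a ?checksum=file:<url>.sha256 query, and dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 .sha256 URL (evidently no darwin/arm64 asset exists for that release), so go-getter aborts before writing the file. Note the v1.20.0 preload tarball itself downloaded and verified fine; only the kubectl cache step failed. A minimal sketch of the same download pattern, assuming the github.com/hashicorp/go-getter dependency and a hypothetical destination path:

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// The checksum= query tells go-getter to fetch <url>.sha256 first and
		// verify the downloaded file against it; a 404 on the .sha256 aborts
		// the whole download, which is the failure logged above.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		client := &getter.Client{
			Src:  src,
			Dst:  "/tmp/kubectl.download", // hypothetical destination
			Mode: getter.ClientModeFile,
		}
		if err := client.Get(); err != nil {
			// Expect: invalid checksum: Error downloading checksum file: bad response code: 404
			log.Fatal(err)
		}
	}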

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
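
This subtest is a downstream casualty of the previous failure: it only checks that the kubectl binary the download-only run should have cached exists on disk. A minimal sketch of that assertion, reusing the cache path from the log above:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Passes only if the earlier download-only run cached kubectl here;
		// after the 404 above it fails with "no such file or directory".
		path := "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("kubectl not cached:", err)
			return
		}
		fmt.Println("kubectl cached")
	}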

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.871303042s)

-- stdout --
	* [offline-docker-326000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-326000" primary control-plane node in "offline-docker-326000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-326000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:38:10.167144    8089 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:38:10.167283    8089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:10.167288    8089 out.go:304] Setting ErrFile to fd 2...
	I0617 04:38:10.167292    8089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:10.167423    8089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:38:10.168627    8089 out.go:298] Setting JSON to false
	I0617 04:38:10.186102    8089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4060,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:38:10.186192    8089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:38:10.191214    8089 out.go:177] * [offline-docker-326000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:38:10.199149    8089 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:38:10.199163    8089 notify.go:220] Checking for updates...
	I0617 04:38:10.204269    8089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:38:10.207251    8089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:38:10.208516    8089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:38:10.211251    8089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:38:10.214259    8089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:38:10.217634    8089 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:10.217691    8089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:38:10.221164    8089 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:38:10.228245    8089 start.go:297] selected driver: qemu2
	I0617 04:38:10.228257    8089 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:38:10.228265    8089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:38:10.230125    8089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:38:10.233200    8089 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:38:10.236269    8089 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:38:10.236301    8089 cni.go:84] Creating CNI manager for ""
	I0617 04:38:10.236307    8089 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:38:10.236310    8089 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:38:10.236339    8089 start.go:340] cluster config:
	{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:38:10.240669    8089 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:38:10.248202    8089 out.go:177] * Starting "offline-docker-326000" primary control-plane node in "offline-docker-326000" cluster
	I0617 04:38:10.252158    8089 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:38:10.252191    8089 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:38:10.252202    8089 cache.go:56] Caching tarball of preloaded images
	I0617 04:38:10.252287    8089 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:38:10.252294    8089 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:38:10.252356    8089 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/offline-docker-326000/config.json ...
	I0617 04:38:10.252369    8089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/offline-docker-326000/config.json: {Name:mkaeafdc0111c174bb00e63279f929c8b5bd3c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:38:10.252678    8089 start.go:360] acquireMachinesLock for offline-docker-326000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:10.252714    8089 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "offline-docker-326000"
	I0617 04:38:10.252724    8089 start.go:93] Provisioning new machine with config: &{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:10.252758    8089 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:10.256192    8089 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:10.271595    8089 start.go:159] libmachine.API.Create for "offline-docker-326000" (driver="qemu2")
	I0617 04:38:10.271624    8089 client.go:168] LocalClient.Create starting
	I0617 04:38:10.271691    8089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:10.271722    8089 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:10.271734    8089 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:10.271778    8089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:10.271802    8089 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:10.271812    8089 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:10.272194    8089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:10.420629    8089 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:10.571590    8089 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:10.571600    8089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:10.571816    8089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:10.585219    8089 main.go:141] libmachine: STDOUT: 
	I0617 04:38:10.585244    8089 main.go:141] libmachine: STDERR: 
	I0617 04:38:10.585307    8089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2 +20000M
	I0617 04:38:10.597989    8089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:10.598019    8089 main.go:141] libmachine: STDERR: 
	I0617 04:38:10.598045    8089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:10.598050    8089 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:10.598086    8089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:ea:ce:55:17:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:10.600101    8089 main.go:141] libmachine: STDOUT: 
	I0617 04:38:10.600120    8089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:10.600140    8089 client.go:171] duration metric: took 328.515125ms to LocalClient.Create
	I0617 04:38:12.602193    8089 start.go:128] duration metric: took 2.349450416s to createHost
	I0617 04:38:12.602211    8089 start.go:83] releasing machines lock for "offline-docker-326000", held for 2.349516375s
	W0617 04:38:12.602226    8089 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:12.611328    8089 out.go:177] * Deleting "offline-docker-326000" in qemu2 ...
	W0617 04:38:12.623707    8089 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:12.623718    8089 start.go:728] Will try again in 5 seconds ...
	I0617 04:38:17.625917    8089 start.go:360] acquireMachinesLock for offline-docker-326000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:17.626483    8089 start.go:364] duration metric: took 445.166µs to acquireMachinesLock for "offline-docker-326000"
	I0617 04:38:17.626627    8089 start.go:93] Provisioning new machine with config: &{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:17.626893    8089 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:17.636499    8089 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:17.688679    8089 start.go:159] libmachine.API.Create for "offline-docker-326000" (driver="qemu2")
	I0617 04:38:17.688746    8089 client.go:168] LocalClient.Create starting
	I0617 04:38:17.688858    8089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:17.688917    8089 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:17.688935    8089 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:17.689005    8089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:17.689049    8089 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:17.689060    8089 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:17.689800    8089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:17.849832    8089 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:17.950948    8089 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:17.950957    8089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:17.951132    8089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:17.963577    8089 main.go:141] libmachine: STDOUT: 
	I0617 04:38:17.963596    8089 main.go:141] libmachine: STDERR: 
	I0617 04:38:17.963652    8089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2 +20000M
	I0617 04:38:17.974447    8089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:17.974466    8089 main.go:141] libmachine: STDERR: 
	I0617 04:38:17.974486    8089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:17.974491    8089 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:17.974535    8089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0a:95:2f:df:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/offline-docker-326000/disk.qcow2
	I0617 04:38:17.976168    8089 main.go:141] libmachine: STDOUT: 
	I0617 04:38:17.976190    8089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:17.976204    8089 client.go:171] duration metric: took 287.454ms to LocalClient.Create
	I0617 04:38:19.978250    8089 start.go:128] duration metric: took 2.351361334s to createHost
	I0617 04:38:19.978270    8089 start.go:83] releasing machines lock for "offline-docker-326000", held for 2.351781708s
	W0617 04:38:19.978341    8089 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:19.983360    8089 out.go:177] 
	W0617 04:38:19.987574    8089 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:38:19.987578    8089 out.go:239] * 
	* 
	W0617 04:38:19.988033    8089 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:38:19.999543    8089 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-06-17 04:38:20.008174 -0700 PDT m=+726.146913792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-326000 -n offline-docker-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-326000 -n offline-docker-326000: exit status 7 (33.607625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-326000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-326000
--- FAIL: TestOffline (10.01s)
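
Nearly every other ~10 s start failure in this report is this same GUEST_PROVISION error: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connect is refused because the daemon is not running on the build agent. A minimal preflight probe for that socket, as a sketch (the two-second timeout is an arbitrary choice):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client dials this unix socket before exec'ing qemu;
		// "connection refused" here reproduces the failures seen above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet daemon unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is listening")
	}

Restoring the daemon on the agent would likely unblock the whole family of tests that exit with status 80 after roughly ten seconds.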

TestAddons/Setup (10.14s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.13673775s)

-- stdout --
	* [addons-585000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-585000" primary control-plane node in "addons-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:26:46.442816    6654 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:26:46.442988    6654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:46.442991    6654 out.go:304] Setting ErrFile to fd 2...
	I0617 04:26:46.442993    6654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:46.443145    6654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:26:46.444380    6654 out.go:298] Setting JSON to false
	I0617 04:26:46.461134    6654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3376,"bootTime":1718620230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:26:46.461195    6654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:26:46.466054    6654 out.go:177] * [addons-585000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:26:46.472915    6654 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:26:46.472948    6654 notify.go:220] Checking for updates...
	I0617 04:26:46.477062    6654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:26:46.479959    6654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:26:46.481284    6654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:26:46.484001    6654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:26:46.486993    6654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:26:46.490124    6654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:26:46.494017    6654 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:26:46.500982    6654 start.go:297] selected driver: qemu2
	I0617 04:26:46.500989    6654 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:26:46.500996    6654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:26:46.503217    6654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:26:46.505994    6654 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:26:46.509072    6654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:26:46.509099    6654 cni.go:84] Creating CNI manager for ""
	I0617 04:26:46.509108    6654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:26:46.509112    6654 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:26:46.509144    6654 start.go:340] cluster config:
	{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:26:46.513657    6654 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:26:46.520991    6654 out.go:177] * Starting "addons-585000" primary control-plane node in "addons-585000" cluster
	I0617 04:26:46.525034    6654 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:26:46.525050    6654 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:26:46.525060    6654 cache.go:56] Caching tarball of preloaded images
	I0617 04:26:46.525123    6654 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:26:46.525129    6654 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:26:46.525373    6654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/addons-585000/config.json ...
	I0617 04:26:46.525385    6654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/addons-585000/config.json: {Name:mk270f4011d428c6b23821655d66dee3dd1beabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:26:46.525775    6654 start.go:360] acquireMachinesLock for addons-585000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:26:46.525843    6654 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "addons-585000"
	I0617 04:26:46.525856    6654 start.go:93] Provisioning new machine with config: &{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:26:46.525885    6654 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:26:46.534973    6654 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0617 04:26:46.554941    6654 start.go:159] libmachine.API.Create for "addons-585000" (driver="qemu2")
	I0617 04:26:46.554984    6654 client.go:168] LocalClient.Create starting
	I0617 04:26:46.555110    6654 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:26:46.616365    6654 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:26:46.689738    6654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:26:46.970567    6654 main.go:141] libmachine: Creating SSH key...
	I0617 04:26:47.097349    6654 main.go:141] libmachine: Creating Disk image...
	I0617 04:26:47.097356    6654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:26:47.097526    6654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:47.110298    6654 main.go:141] libmachine: STDOUT: 
	I0617 04:26:47.110332    6654 main.go:141] libmachine: STDERR: 
	I0617 04:26:47.110382    6654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2 +20000M
	I0617 04:26:47.121349    6654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:26:47.121368    6654 main.go:141] libmachine: STDERR: 
	I0617 04:26:47.121379    6654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:47.121384    6654 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:26:47.121420    6654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:be:15:87:c8:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:47.123116    6654 main.go:141] libmachine: STDOUT: 
	I0617 04:26:47.123133    6654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:26:47.123157    6654 client.go:171] duration metric: took 568.183833ms to LocalClient.Create
	I0617 04:26:49.125304    6654 start.go:128] duration metric: took 2.599474333s to createHost
	I0617 04:26:49.125358    6654 start.go:83] releasing machines lock for "addons-585000", held for 2.599582334s
	W0617 04:26:49.125427    6654 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:26:49.143806    6654 out.go:177] * Deleting "addons-585000" in qemu2 ...
	W0617 04:26:49.177065    6654 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:26:49.177095    6654 start.go:728] Will try again in 5 seconds ...
	I0617 04:26:54.179141    6654 start.go:360] acquireMachinesLock for addons-585000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:26:54.179580    6654 start.go:364] duration metric: took 350.875µs to acquireMachinesLock for "addons-585000"
	I0617 04:26:54.179733    6654 start.go:93] Provisioning new machine with config: &{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:26:54.180038    6654 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:26:54.188929    6654 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0617 04:26:54.237953    6654 start.go:159] libmachine.API.Create for "addons-585000" (driver="qemu2")
	I0617 04:26:54.238001    6654 client.go:168] LocalClient.Create starting
	I0617 04:26:54.238111    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:26:54.238167    6654 main.go:141] libmachine: Decoding PEM data...
	I0617 04:26:54.238181    6654 main.go:141] libmachine: Parsing certificate...
	I0617 04:26:54.238299    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:26:54.238343    6654 main.go:141] libmachine: Decoding PEM data...
	I0617 04:26:54.238354    6654 main.go:141] libmachine: Parsing certificate...
	I0617 04:26:54.238865    6654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:26:54.395471    6654 main.go:141] libmachine: Creating SSH key...
	I0617 04:26:54.478716    6654 main.go:141] libmachine: Creating Disk image...
	I0617 04:26:54.478721    6654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:26:54.478881    6654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:54.491489    6654 main.go:141] libmachine: STDOUT: 
	I0617 04:26:54.491508    6654 main.go:141] libmachine: STDERR: 
	I0617 04:26:54.491558    6654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2 +20000M
	I0617 04:26:54.502943    6654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:26:54.502975    6654 main.go:141] libmachine: STDERR: 
	I0617 04:26:54.502989    6654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:54.502993    6654 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:26:54.503028    6654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f7:d7:58:b3:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/addons-585000/disk.qcow2
	I0617 04:26:54.504857    6654 main.go:141] libmachine: STDOUT: 
	I0617 04:26:54.504870    6654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:26:54.504883    6654 client.go:171] duration metric: took 266.884709ms to LocalClient.Create
	I0617 04:26:56.506807    6654 start.go:128] duration metric: took 2.32678775s to createHost
	I0617 04:26:56.507100    6654 start.go:83] releasing machines lock for "addons-585000", held for 2.327550792s
	W0617 04:26:56.507473    6654 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:26:56.516664    6654 out.go:177] 
	W0617 04:26:56.524877    6654 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:26:56.524937    6654 out.go:239] * 
	* 
	W0617 04:26:56.527720    6654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:26:56.536697    6654 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.14s)
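
The stderr trace above shows how the qemu2 driver attaches networking: qemu-system-aarch64 is exec'd under /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as fd 3 (-netdev socket,id=net0,fd=3). The "Connection refused" can therefore be reproduced without minikube at all; a sketch, using true as a stand-in for the QEMU command line:

	# If the daemon is down, this should fail with the same message as the log above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true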

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-907000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-907000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.833618667s)

-- stdout --
	* [cert-options-907000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-907000" primary control-plane node in "cert-options-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-907000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-907000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-907000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (86.009292ms)

-- stdout --
	* The control-plane node cert-options-907000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-907000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-907000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-907000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-907000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-907000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.030417ms)

-- stdout --
	* The control-plane node cert-options-907000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-907000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-907000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-907000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-907000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-06-17 04:38:50.621545 -0700 PDT m=+756.760600001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-907000 -n cert-options-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-907000 -n cert-options-907000: exit status 7 (29.265042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-907000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-907000
--- FAIL: TestCertOptions (10.13s)
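
For context, the SAN assertions at cert_options_test.go:69 depend entirely on the ssh step above succeeding. Against a running cluster the check reduces to reading the apiserver certificate and filtering for the SAN extension; a sketch (the grep filter is illustrative, not part of the test):

	out/minikube-darwin-arm64 -p cert-options-907000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"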

TestCertExpiration (195.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.118403375s)

-- stdout --
	* [cert-expiration-317000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-317000" primary control-plane node in "cert-expiration-317000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-317000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-317000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.209182583s)

-- stdout --
	* [cert-expiration-317000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-317000" primary control-plane node in "cert-expiration-317000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-317000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-317000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-317000" primary control-plane node in "cert-expiration-317000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-06-17 04:41:50.696613 -0700 PDT m=+936.837525626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-317000 -n cert-expiration-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-317000 -n cert-expiration-317000: exit status 7 (33.103041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-317000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-317000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-317000
--- FAIL: TestCertExpiration (195.47s)
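
Note the 195.47s duration: the two start attempts fail in roughly 10s and 5s, so almost all of the elapsed time is the test waiting out the 3-minute --cert-expiration=3m window between the first start and the 8760h restart. Had the VM come up, the expiry under test could be inspected directly; a sketch using the same paths as the commands above:

	out/minikube-darwin-arm64 -p cert-expiration-317000 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"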

TestDockerFlags (10.26s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-458000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-458000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.003514208s)

-- stdout --
	* [docker-flags-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-458000" primary control-plane node in "docker-flags-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:38:30.395858    8285 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:38:30.395981    8285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:30.395984    8285 out.go:304] Setting ErrFile to fd 2...
	I0617 04:38:30.395987    8285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:30.396135    8285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:38:30.397254    8285 out.go:298] Setting JSON to false
	I0617 04:38:30.413512    8285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4080,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:38:30.413595    8285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:38:30.420769    8285 out.go:177] * [docker-flags-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:38:30.427679    8285 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:38:30.430742    8285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:38:30.427711    8285 notify.go:220] Checking for updates...
	I0617 04:38:30.433708    8285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:38:30.436687    8285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:38:30.440699    8285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:38:30.443598    8285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:38:30.447046    8285 config.go:182] Loaded profile config "force-systemd-flag-192000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:30.447111    8285 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:30.447151    8285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:38:30.451670    8285 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:38:30.458687    8285 start.go:297] selected driver: qemu2
	I0617 04:38:30.458694    8285 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:38:30.458700    8285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:38:30.461005    8285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:38:30.464748    8285 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:38:30.467794    8285 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0617 04:38:30.467832    8285 cni.go:84] Creating CNI manager for ""
	I0617 04:38:30.467841    8285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:38:30.467847    8285 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:38:30.467883    8285 start.go:340] cluster config:
	{Name:docker-flags-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:38:30.472556    8285 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:38:30.480669    8285 out.go:177] * Starting "docker-flags-458000" primary control-plane node in "docker-flags-458000" cluster
	I0617 04:38:30.484610    8285 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:38:30.484629    8285 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:38:30.484636    8285 cache.go:56] Caching tarball of preloaded images
	I0617 04:38:30.484692    8285 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:38:30.484697    8285 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:38:30.484764    8285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/docker-flags-458000/config.json ...
	I0617 04:38:30.484775    8285 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/docker-flags-458000/config.json: {Name:mk146d4e60e7e39a2a870357043495dbd00ad960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:38:30.485027    8285 start.go:360] acquireMachinesLock for docker-flags-458000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:30.485072    8285 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "docker-flags-458000"
	I0617 04:38:30.485084    8285 start.go:93] Provisioning new machine with config: &{Name:docker-flags-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:30.485123    8285 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:30.491695    8285 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:30.510167    8285 start.go:159] libmachine.API.Create for "docker-flags-458000" (driver="qemu2")
	I0617 04:38:30.510204    8285 client.go:168] LocalClient.Create starting
	I0617 04:38:30.510277    8285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:30.510314    8285 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:30.510328    8285 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:30.510370    8285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:30.510393    8285 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:30.510398    8285 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:30.510834    8285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:30.658388    8285 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:30.748853    8285 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:30.748858    8285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:30.749028    8285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:30.761672    8285 main.go:141] libmachine: STDOUT: 
	I0617 04:38:30.761688    8285 main.go:141] libmachine: STDERR: 
	I0617 04:38:30.761756    8285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2 +20000M
	I0617 04:38:30.772910    8285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:30.772923    8285 main.go:141] libmachine: STDERR: 
	I0617 04:38:30.772942    8285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:30.772946    8285 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:30.772993    8285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:96:71:7a:4c:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:30.774611    8285 main.go:141] libmachine: STDOUT: 
	I0617 04:38:30.774624    8285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:30.774642    8285 client.go:171] duration metric: took 264.434167ms to LocalClient.Create
	I0617 04:38:32.776829    8285 start.go:128] duration metric: took 2.291708042s to createHost
	I0617 04:38:32.776897    8285 start.go:83] releasing machines lock for "docker-flags-458000", held for 2.29183825s
	W0617 04:38:32.776947    8285 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:32.802137    8285 out.go:177] * Deleting "docker-flags-458000" in qemu2 ...
	W0617 04:38:32.826575    8285 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:32.826596    8285 start.go:728] Will try again in 5 seconds ...
	I0617 04:38:37.828633    8285 start.go:360] acquireMachinesLock for docker-flags-458000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:37.828876    8285 start.go:364] duration metric: took 155.375µs to acquireMachinesLock for "docker-flags-458000"
	I0617 04:38:37.828919    8285 start.go:93] Provisioning new machine with config: &{Name:docker-flags-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:37.829070    8285 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:37.838523    8285 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:37.877633    8285 start.go:159] libmachine.API.Create for "docker-flags-458000" (driver="qemu2")
	I0617 04:38:37.877681    8285 client.go:168] LocalClient.Create starting
	I0617 04:38:37.877789    8285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:37.877849    8285 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:37.877863    8285 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:37.877931    8285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:37.877969    8285 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:37.877978    8285 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:37.879131    8285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:38.045600    8285 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:38.297759    8285 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:38.297768    8285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:38.297994    8285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:38.311159    8285 main.go:141] libmachine: STDOUT: 
	I0617 04:38:38.311184    8285 main.go:141] libmachine: STDERR: 
	I0617 04:38:38.311241    8285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2 +20000M
	I0617 04:38:38.322135    8285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:38.322151    8285 main.go:141] libmachine: STDERR: 
	I0617 04:38:38.322161    8285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:38.322169    8285 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:38.322216    8285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e7:42:4c:db:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/docker-flags-458000/disk.qcow2
	I0617 04:38:38.323882    8285 main.go:141] libmachine: STDOUT: 
	I0617 04:38:38.323897    8285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:38.323920    8285 client.go:171] duration metric: took 446.229334ms to LocalClient.Create
	I0617 04:38:40.326068    8285 start.go:128] duration metric: took 2.496995875s to createHost
	I0617 04:38:40.326140    8285 start.go:83] releasing machines lock for "docker-flags-458000", held for 2.497271042s
	W0617 04:38:40.326591    8285 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:40.335195    8285 out.go:177] 
	W0617 04:38:40.343191    8285 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:38:40.343218    8285 out.go:239] * 
	* 
	W0617 04:38:40.345889    8285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:38:40.356162    8285 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-458000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-458000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-458000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.955708ms)

-- stdout --
	* The control-plane node docker-flags-458000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-458000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-458000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-458000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-458000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-458000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-458000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-458000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-458000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.580375ms)

-- stdout --
	* The control-plane node docker-flags-458000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-458000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-458000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-458000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-458000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-458000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-06-17 04:38:40.495058 -0700 PDT m=+746.634008584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-458000 -n docker-flags-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-458000 -n docker-flags-458000: exit status 7 (31.311417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-458000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-458000
--- FAIL: TestDockerFlags (10.26s)
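
Triage note: both create attempts above die at the same step — socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet — so the qemu2 VM is never created and the --docker-env/--docker-opt assertions run against a stopped host. A minimal triage sketch for the CI agent follows; the daemon path matches the SocketVMnetPath in the config logged above, while the brew service name and the --vmnet-gateway value are assumptions taken from the socket_vmnet README and may differ on this host.

	# check whether anything is serving the socket the tests expect
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# restart the daemon (assumes a Homebrew-managed install; needs root)
	sudo brew services restart socket_vmnet
	# or run it in the foreground to surface startup errors (gateway address is illustrative)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections again, the failing start command can be re-run unchanged.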

TestForceSystemdFlag (10.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-192000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-192000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.229681333s)

-- stdout --
	* [force-systemd-flag-192000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-192000" primary control-plane node in "force-systemd-flag-192000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-192000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:38:24.950502    8263 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:38:24.950667    8263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:24.950670    8263 out.go:304] Setting ErrFile to fd 2...
	I0617 04:38:24.950672    8263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:24.950795    8263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:38:24.951834    8263 out.go:298] Setting JSON to false
	I0617 04:38:24.967769    8263 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4074,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:38:24.967832    8263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:38:24.974867    8263 out.go:177] * [force-systemd-flag-192000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:38:24.982915    8263 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:38:24.989790    8263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:38:24.982951    8263 notify.go:220] Checking for updates...
	I0617 04:38:24.996794    8263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:38:24.999796    8263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:38:25.002774    8263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:38:25.005762    8263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:38:25.009140    8263 config.go:182] Loaded profile config "force-systemd-env-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:25.009219    8263 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:25.009272    8263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:38:25.013702    8263 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:38:25.020785    8263 start.go:297] selected driver: qemu2
	I0617 04:38:25.020790    8263 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:38:25.020794    8263 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:38:25.022924    8263 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:38:25.026747    8263 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:38:25.030849    8263 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:38:25.030863    8263 cni.go:84] Creating CNI manager for ""
	I0617 04:38:25.030871    8263 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:38:25.030879    8263 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:38:25.030909    8263 start.go:340] cluster config:
	{Name:force-systemd-flag-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:38:25.035724    8263 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:38:25.042778    8263 out.go:177] * Starting "force-systemd-flag-192000" primary control-plane node in "force-systemd-flag-192000" cluster
	I0617 04:38:25.045771    8263 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:38:25.045791    8263 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:38:25.045804    8263 cache.go:56] Caching tarball of preloaded images
	I0617 04:38:25.045874    8263 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:38:25.045881    8263 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:38:25.045962    8263 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/force-systemd-flag-192000/config.json ...
	I0617 04:38:25.045973    8263 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/force-systemd-flag-192000/config.json: {Name:mk70c036789dc0178f0df805d946fbcb682daa33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:38:25.046229    8263 start.go:360] acquireMachinesLock for force-systemd-flag-192000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:25.046269    8263 start.go:364] duration metric: took 31.042µs to acquireMachinesLock for "force-systemd-flag-192000"
	I0617 04:38:25.046281    8263 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:25.046319    8263 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:25.052640    8263 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:25.071691    8263 start.go:159] libmachine.API.Create for "force-systemd-flag-192000" (driver="qemu2")
	I0617 04:38:25.071728    8263 client.go:168] LocalClient.Create starting
	I0617 04:38:25.071807    8263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:25.071846    8263 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:25.071858    8263 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:25.071908    8263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:25.071933    8263 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:25.071945    8263 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:25.072339    8263 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:25.233095    8263 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:25.271646    8263 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:25.271653    8263 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:25.271828    8263 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:25.284108    8263 main.go:141] libmachine: STDOUT: 
	I0617 04:38:25.284128    8263 main.go:141] libmachine: STDERR: 
	I0617 04:38:25.284174    8263 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2 +20000M
	I0617 04:38:25.295268    8263 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:25.295292    8263 main.go:141] libmachine: STDERR: 
	I0617 04:38:25.295308    8263 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:25.295314    8263 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:25.295343    8263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:9b:c7:ab:25:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:25.297071    8263 main.go:141] libmachine: STDOUT: 
	I0617 04:38:25.297087    8263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:25.297106    8263 client.go:171] duration metric: took 225.373292ms to LocalClient.Create
	I0617 04:38:27.299352    8263 start.go:128] duration metric: took 2.252982334s to createHost
	I0617 04:38:27.299408    8263 start.go:83] releasing machines lock for "force-systemd-flag-192000", held for 2.253151042s
	W0617 04:38:27.299455    8263 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:27.311770    8263 out.go:177] * Deleting "force-systemd-flag-192000" in qemu2 ...
	W0617 04:38:27.343550    8263 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:27.343575    8263 start.go:728] Will try again in 5 seconds ...
	I0617 04:38:32.345735    8263 start.go:360] acquireMachinesLock for force-systemd-flag-192000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:32.777070    8263 start.go:364] duration metric: took 431.203083ms to acquireMachinesLock for "force-systemd-flag-192000"
	I0617 04:38:32.777193    8263 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:32.777489    8263 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:32.791981    8263 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:32.841323    8263 start.go:159] libmachine.API.Create for "force-systemd-flag-192000" (driver="qemu2")
	I0617 04:38:32.841371    8263 client.go:168] LocalClient.Create starting
	I0617 04:38:32.841473    8263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:32.841536    8263 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:32.841554    8263 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:32.841622    8263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:32.841664    8263 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:32.841679    8263 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:32.842220    8263 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:33.022631    8263 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:33.082951    8263 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:33.082960    8263 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:33.083158    8263 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:33.095696    8263 main.go:141] libmachine: STDOUT: 
	I0617 04:38:33.095718    8263 main.go:141] libmachine: STDERR: 
	I0617 04:38:33.095773    8263 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2 +20000M
	I0617 04:38:33.106532    8263 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:33.106548    8263 main.go:141] libmachine: STDERR: 
	I0617 04:38:33.106560    8263 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:33.106565    8263 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:33.106597    8263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:e0:6e:d2:41:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-flag-192000/disk.qcow2
	I0617 04:38:33.108322    8263 main.go:141] libmachine: STDOUT: 
	I0617 04:38:33.108341    8263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:33.108357    8263 client.go:171] duration metric: took 266.983333ms to LocalClient.Create
	I0617 04:38:35.110497    8263 start.go:128] duration metric: took 2.332997916s to createHost
	I0617 04:38:35.110565    8263 start.go:83] releasing machines lock for "force-systemd-flag-192000", held for 2.3334795s
	W0617 04:38:35.110980    8263 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-192000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-192000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:35.121505    8263 out.go:177] 
	W0617 04:38:35.125603    8263 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:38:35.125646    8263 out.go:239] * 
	* 
	W0617 04:38:35.128212    8263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:38:35.138581    8263 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-192000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-192000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-192000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.611125ms)

-- stdout --
	* The control-plane node force-systemd-flag-192000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-192000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-192000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-06-17 04:38:35.231521 -0700 PDT m=+741.370418042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-192000 -n force-systemd-flag-192000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-192000 -n force-systemd-flag-192000: exit status 7 (34.81275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-192000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-192000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-192000
--- FAIL: TestForceSystemdFlag (10.44s)
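
Triage note: same root cause as TestDockerFlags above — the VM never boots because /var/run/socket_vmnet refuses connections — so the cgroup-driver assertion never executes against a live host. Once socket_vmnet is reachable again, the check can be replayed by hand with the exact commands from the log above; the test passes when docker inside the VM reports the systemd cgroup driver.

	out/minikube-darwin-arm64 start -p force-systemd-flag-192000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2
	out/minikube-darwin-arm64 -p force-systemd-flag-192000 ssh "docker info --format {{.CgroupDriver}}"
	# expected on success: systemd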

TestForceSystemdEnv (10.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-389000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-389000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.005828542s)

-- stdout --
	* [force-systemd-env-389000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-389000" primary control-plane node in "force-systemd-env-389000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-389000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:38:20.172603    8241 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:38:20.172740    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:20.172743    8241 out.go:304] Setting ErrFile to fd 2...
	I0617 04:38:20.172745    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:38:20.172870    8241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:38:20.173896    8241 out.go:298] Setting JSON to false
	I0617 04:38:20.190406    8241 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4070,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:38:20.190476    8241 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:38:20.197543    8241 out.go:177] * [force-systemd-env-389000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:38:20.205416    8241 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:38:20.208604    8241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:38:20.205515    8241 notify.go:220] Checking for updates...
	I0617 04:38:20.212995    8241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:38:20.216494    8241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:38:20.219554    8241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:38:20.222522    8241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0617 04:38:20.225838    8241 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:38:20.225880    8241 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:38:20.230498    8241 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:38:20.237492    8241 start.go:297] selected driver: qemu2
	I0617 04:38:20.237496    8241 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:38:20.237501    8241 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:38:20.239635    8241 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:38:20.242544    8241 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:38:20.245617    8241 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:38:20.245646    8241 cni.go:84] Creating CNI manager for ""
	I0617 04:38:20.245652    8241 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:38:20.245655    8241 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:38:20.245681    8241 start.go:340] cluster config:
	{Name:force-systemd-env-389000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:38:20.249764    8241 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:38:20.256514    8241 out.go:177] * Starting "force-systemd-env-389000" primary control-plane node in "force-systemd-env-389000" cluster
	I0617 04:38:20.259521    8241 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:38:20.259533    8241 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:38:20.259539    8241 cache.go:56] Caching tarball of preloaded images
	I0617 04:38:20.259588    8241 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:38:20.259593    8241 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:38:20.259648    8241 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/force-systemd-env-389000/config.json ...
	I0617 04:38:20.259658    8241 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/force-systemd-env-389000/config.json: {Name:mkf4203f8740630bf2b7eb694241a14a72dc0235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:38:20.259860    8241 start.go:360] acquireMachinesLock for force-systemd-env-389000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:20.259896    8241 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "force-systemd-env-389000"
	I0617 04:38:20.259907    8241 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:20.259934    8241 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:20.268454    8241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:20.283486    8241 start.go:159] libmachine.API.Create for "force-systemd-env-389000" (driver="qemu2")
	I0617 04:38:20.283517    8241 client.go:168] LocalClient.Create starting
	I0617 04:38:20.283587    8241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:20.283615    8241 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:20.283625    8241 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:20.283671    8241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:20.283693    8241 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:20.283700    8241 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:20.284054    8241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:20.426616    8241 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:20.614761    8241 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:20.614772    8241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:20.614986    8241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:20.628433    8241 main.go:141] libmachine: STDOUT: 
	I0617 04:38:20.628458    8241 main.go:141] libmachine: STDERR: 
	I0617 04:38:20.628528    8241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2 +20000M
	I0617 04:38:20.640212    8241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:20.640228    8241 main.go:141] libmachine: STDERR: 
	I0617 04:38:20.640242    8241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:20.640255    8241 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:20.640284    8241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ad:84:9e:77:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:20.642088    8241 main.go:141] libmachine: STDOUT: 
	I0617 04:38:20.642105    8241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:20.642124    8241 client.go:171] duration metric: took 358.604417ms to LocalClient.Create
	I0617 04:38:22.644434    8241 start.go:128] duration metric: took 2.384471125s to createHost
	I0617 04:38:22.644531    8241 start.go:83] releasing machines lock for "force-systemd-env-389000", held for 2.384649584s
	W0617 04:38:22.644573    8241 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:22.653086    8241 out.go:177] * Deleting "force-systemd-env-389000" in qemu2 ...
	W0617 04:38:22.680386    8241 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:22.680419    8241 start.go:728] Will try again in 5 seconds ...
	I0617 04:38:27.682528    8241 start.go:360] acquireMachinesLock for force-systemd-env-389000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:38:27.683024    8241 start.go:364] duration metric: took 402.084µs to acquireMachinesLock for "force-systemd-env-389000"
	I0617 04:38:27.683188    8241 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:38:27.683478    8241 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:38:27.691897    8241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0617 04:38:27.741904    8241 start.go:159] libmachine.API.Create for "force-systemd-env-389000" (driver="qemu2")
	I0617 04:38:27.741952    8241 client.go:168] LocalClient.Create starting
	I0617 04:38:27.742080    8241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:38:27.742152    8241 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:27.742168    8241 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:27.742232    8241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:38:27.742280    8241 main.go:141] libmachine: Decoding PEM data...
	I0617 04:38:27.742294    8241 main.go:141] libmachine: Parsing certificate...
	I0617 04:38:27.743525    8241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:38:27.910765    8241 main.go:141] libmachine: Creating SSH key...
	I0617 04:38:28.075656    8241 main.go:141] libmachine: Creating Disk image...
	I0617 04:38:28.075662    8241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:38:28.075853    8241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:28.088562    8241 main.go:141] libmachine: STDOUT: 
	I0617 04:38:28.088579    8241 main.go:141] libmachine: STDERR: 
	I0617 04:38:28.088629    8241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2 +20000M
	I0617 04:38:28.099553    8241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:38:28.099571    8241 main.go:141] libmachine: STDERR: 
	I0617 04:38:28.099596    8241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:28.099600    8241 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:38:28.099634    8241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:de:f4:b2:9f:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/force-systemd-env-389000/disk.qcow2
	I0617 04:38:28.101338    8241 main.go:141] libmachine: STDOUT: 
	I0617 04:38:28.101351    8241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:38:28.101364    8241 client.go:171] duration metric: took 359.409458ms to LocalClient.Create
	I0617 04:38:30.103514    8241 start.go:128] duration metric: took 2.42002975s to createHost
	I0617 04:38:30.103573    8241 start.go:83] releasing machines lock for "force-systemd-env-389000", held for 2.4205445s
	W0617 04:38:30.103963    8241 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:38:30.117454    8241 out.go:177] 
	W0617 04:38:30.121643    8241 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:38:30.121672    8241 out.go:239] * 
	* 
	W0617 04:38:30.124551    8241 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:38:30.133524    8241 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-389000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-389000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-389000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.438625ms)

-- stdout --
	* The control-plane node force-systemd-env-389000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-389000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-389000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-06-17 04:38:30.229983 -0700 PDT m=+736.368828167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-389000 -n force-systemd-env-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-389000 -n force-systemd-env-389000: exit status 7 (35.498417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-389000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-389000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-389000
--- FAIL: TestForceSystemdEnv (10.23s)
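Note: every failure in this report reduces to the same root cause visible in the stderr above: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU never receives a network file descriptor and the VM is never created. A minimal spot-check on the build host might look like the sketch below. Only the client path and socket path are taken from the log; the daemon binary path and the gateway address are assumptions based on a conventional socket_vmnet install.
	# Is anything serving the unix socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet is listening"
	# If not, (re)start the daemon as root. The binary path mirrors the client
	# path logged above; the gateway address is the project's documented default.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet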

TestErrorSpam/setup (9.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-533000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-533000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 --driver=qemu2 : exit status 80 (9.798492583s)

-- stdout --
	* [nospam-533000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-533000" primary control-plane node in "nospam-533000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-533000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-533000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-533000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-533000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19087
- KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-533000" primary control-plane node in "nospam-533000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-533000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.80s)
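Note: this test asserts a clean start, i.e. a stderr free of unexpected warnings and a stdout containing the kubeadm init sub-steps quoted at error_spam_test.go:121; neither is possible here because the VM never boots. The failing step can be reproduced outside the harness with the same flags:
	# Same invocation as error_spam_test.go:81, minus the harness plumbing
	out/minikube-darwin-arm64 start -p nospam-533000 -n=1 --memory=2250 --wait=false --driver=qemu2
	echo $?    # 80, the GUEST_PROVISION exit code seen throughout this report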

TestFunctional/serial/StartWithProxy (10.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.938067834s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-296000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19087
- KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-296000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51087 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (68.83325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.01s)
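Note: the harness runs this start with a local proxy in the environment (HTTP_PROXY=localhost:51087, per the stderr above) and expects the "Found network options:" and "You appear to be using a proxy" messages, which minikube only prints once a host actually comes up. The repeated "Local proxy ignored" warnings are minikube declining to pass a localhost proxy into the VM's Docker environment. A sketch of the proxied start, assuming the proxy is injected via the environment:
	HTTP_PROXY=localhost:51087 out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2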

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8: exit status 80 (5.181092917s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:27:26.451127    6796 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:27:26.451278    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:27:26.451282    6796 out.go:304] Setting ErrFile to fd 2...
	I0617 04:27:26.451284    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:27:26.451393    6796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:27:26.452455    6796 out.go:298] Setting JSON to false
	I0617 04:27:26.468554    6796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3416,"bootTime":1718620230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:27:26.468626    6796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:27:26.473746    6796 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:27:26.480756    6796 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:27:26.483614    6796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:27:26.480814    6796 notify.go:220] Checking for updates...
	I0617 04:27:26.489129    6796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:27:26.492707    6796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:27:26.495652    6796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:27:26.498737    6796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:27:26.501885    6796 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:27:26.501946    6796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:27:26.506687    6796 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:27:26.513607    6796 start.go:297] selected driver: qemu2
	I0617 04:27:26.513614    6796 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:27:26.513691    6796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:27:26.515830    6796 cni.go:84] Creating CNI manager for ""
	I0617 04:27:26.515846    6796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:27:26.515894    6796 start.go:340] cluster config:
	{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:27:26.520314    6796 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:27:26.527632    6796 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	I0617 04:27:26.531641    6796 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:27:26.531656    6796 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:27:26.531663    6796 cache.go:56] Caching tarball of preloaded images
	I0617 04:27:26.531720    6796 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:27:26.531725    6796 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:27:26.531785    6796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/functional-296000/config.json ...
	I0617 04:27:26.532261    6796 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:27:26.532291    6796 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "functional-296000"
	I0617 04:27:26.532301    6796 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:27:26.532306    6796 fix.go:54] fixHost starting: 
	I0617 04:27:26.532423    6796 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0617 04:27:26.532433    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:27:26.535619    6796 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0617 04:27:26.543661    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
	I0617 04:27:26.545659    6796 main.go:141] libmachine: STDOUT: 
	I0617 04:27:26.545677    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:27:26.545710    6796 fix.go:56] duration metric: took 13.402167ms for fixHost
	I0617 04:27:26.545714    6796 start.go:83] releasing machines lock for "functional-296000", held for 13.418417ms
	W0617 04:27:26.545722    6796 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:27:26.545759    6796 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:27:26.545764    6796 start.go:728] Will try again in 5 seconds ...
	I0617 04:27:31.547873    6796 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:27:31.548289    6796 start.go:364] duration metric: took 318.458µs to acquireMachinesLock for "functional-296000"
	I0617 04:27:31.548438    6796 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:27:31.548463    6796 fix.go:54] fixHost starting: 
	I0617 04:27:31.549198    6796 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0617 04:27:31.549230    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:27:31.552857    6796 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0617 04:27:31.556030    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
	I0617 04:27:31.566101    6796 main.go:141] libmachine: STDOUT: 
	I0617 04:27:31.566161    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:27:31.566259    6796 fix.go:56] duration metric: took 17.799625ms for fixHost
	I0617 04:27:31.566281    6796 start.go:83] releasing machines lock for "functional-296000", held for 17.968208ms
	W0617 04:27:31.566462    6796 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:27:31.572799    6796 out.go:177] 
	W0617 04:27:31.576867    6796 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:27:31.576899    6796 out.go:239] * 
	* 
	W0617 04:27:31.579349    6796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:27:31.587690    6796 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.182871083s for "functional-296000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (67.946ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
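Note: SoftStart takes the fix-host path ("Restarting existing qemu2 VM") because a stopped profile is left over from the StartWithProxy failure, and the restart dies on the same socket. The recovery the log itself recommends is to drop the stale profile and recreate it:
	out/minikube-darwin-arm64 delete -p functional-296000
	out/minikube-darwin-arm64 start -p functional-296000 --driver=qemu2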

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.203875ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-296000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.600333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
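Note: current-context is unset because the failed start never wrote the profile into /Users/jenkins/minikube-integration/19087-6045/kubeconfig. For contrast, the check on a healthy run, using standard kubectl subcommands:
	kubectl config get-contexts       # should list functional-296000
	kubectl config current-context    # should print functional-296000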

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-296000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-296000 get po -A: exit status 1 (26.002709ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-296000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-296000\n"*: args "kubectl --context functional-296000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-296000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.331ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images: exit status 83 (41.754417ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
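Note: exit status 83 is minikube's "host is not running" advice path, so the assertion on the pause:3.3 digest (3d18732f8686c) never sees real data. On a running node the check reduces to listing the container runtime's images over ssh:
	out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images | grep pause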

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.923292ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.944833ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.672291ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
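Note: the intent here is delete-then-restore: remove the cached image inside the node, run cache reload to push it back from the host-side cache, then verify with crictl. Condensed from the commands the test itself runs:
	out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-296000 cache reload
	out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest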

TestFunctional/serial/MinikubeKubectlCmd (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods: exit status 1 (600.811083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-296000
	* no server found for cluster "functional-296000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (31.443833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.63s)
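Both kubectl complaints trace back to a kubeconfig with no functional-296000 entry, the expected state when the earlier start never provisioned the cluster. A hypothetical pre-flight helper a harness could use to distinguish "context missing" from "apiserver down" (sketch only; "kubectl config get-contexts -o name" prints one context name per line):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasContext reports whether the active kubeconfig defines the named context.
	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("functional-296000")
		fmt.Println(ok, err) // false <nil> on this run: the cluster was never created
	}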

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.82s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-296000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-296000 get pods: exit status 1 (928.26325ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-296000
	* no server found for cluster "functional-296000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-296000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (886.451875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.82s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.19745575s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.198664375s for "functional-296000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (70.52975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
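The restart dies at the same point as every other start in this report: the qemu2 driver cannot reach socket_vmnet's unix socket, so the VM never gets its network. A quick probe of the socket (path taken from the SocketVMnetPath field in the config dump above; illustrative, not part of minikube) confirms whether the daemon is listening:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" matches the driver error above and means
			// socket_vmnet is not running (or listens on a different path).
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the build agent (however it is managed there, e.g. via launchd) is the likely fix; deleting the profile, as the error text suggests, does not help while the socket itself is down.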

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.717875ms)

** stderr ** 
	error: context "functional-296000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.429583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
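ComponentHealth never reaches its health logic: the kubectl call fails at context resolution. For reference, the check's shape is to list the tier=control-plane pods as JSON and inspect their status; a trimmed sketch of that pattern (the struct covers only the fields the loop reads; illustrative, not the test's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList mirrors just enough of kubectl's JSON output to read pod phases.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-296000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err) // what happens on this run
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase)
		}
	}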

TestFunctional/serial/LogsCmd (0.09s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 logs: exit status 83 (88.700541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | -p download-only-246000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| start   | -o=json --download-only                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | -p download-only-763000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| start   | --download-only -p                                                       | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | binary-mirror-001000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51054                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-001000                                                  | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| addons  | enable dashboard -p                                                      | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | addons-585000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | addons-585000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-585000 --wait=true                                             | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-585000                                                         | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| start   | -p nospam-533000 -n=1 --memory=2250 --wait=false                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-533000                                                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
	| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | --context functional-296000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 04:27:38
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 04:27:38.967777    6880 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:27:38.967896    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:27:38.967902    6880 out.go:304] Setting ErrFile to fd 2...
	I0617 04:27:38.967904    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:27:38.968043    6880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:27:38.969234    6880 out.go:298] Setting JSON to false
	I0617 04:27:38.985488    6880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3428,"bootTime":1718620230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:27:38.985549    6880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:27:38.991903    6880 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:27:38.998813    6880 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:27:38.998866    6880 notify.go:220] Checking for updates...
	I0617 04:27:39.006855    6880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:27:39.010795    6880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:27:39.013845    6880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:27:39.016861    6880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:27:39.019869    6880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:27:39.023128    6880 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:27:39.023183    6880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:27:39.028293    6880 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:27:39.035760    6880 start.go:297] selected driver: qemu2
	I0617 04:27:39.035763    6880 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:27:39.035805    6880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:27:39.038008    6880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:27:39.038050    6880 cni.go:84] Creating CNI manager for ""
	I0617 04:27:39.038058    6880 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:27:39.038109    6880 start.go:340] cluster config:
	{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:27:39.042601    6880 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:27:39.050804    6880 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	I0617 04:27:39.054843    6880 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:27:39.054858    6880 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:27:39.054863    6880 cache.go:56] Caching tarball of preloaded images
	I0617 04:27:39.054932    6880 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:27:39.054936    6880 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:27:39.055005    6880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/functional-296000/config.json ...
	I0617 04:27:39.055523    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:27:39.055559    6880 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "functional-296000"
	I0617 04:27:39.055570    6880 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:27:39.055576    6880 fix.go:54] fixHost starting: 
	I0617 04:27:39.055699    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0617 04:27:39.055707    6880 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:27:39.059781    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0617 04:27:39.067619    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
	I0617 04:27:39.069748    6880 main.go:141] libmachine: STDOUT: 
	I0617 04:27:39.069763    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:27:39.069797    6880 fix.go:56] duration metric: took 14.219916ms for fixHost
	I0617 04:27:39.069801    6880 start.go:83] releasing machines lock for "functional-296000", held for 14.239334ms
	W0617 04:27:39.069806    6880 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:27:39.069864    6880 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:27:39.069869    6880 start.go:728] Will try again in 5 seconds ...
	I0617 04:27:44.072054    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:27:44.072520    6880 start.go:364] duration metric: took 381.75µs to acquireMachinesLock for "functional-296000"
	I0617 04:27:44.072671    6880 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:27:44.072686    6880 fix.go:54] fixHost starting: 
	I0617 04:27:44.073426    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0617 04:27:44.073444    6880 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:27:44.082885    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0617 04:27:44.086293    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
	I0617 04:27:44.096045    6880 main.go:141] libmachine: STDOUT: 
	I0617 04:27:44.096089    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:27:44.096191    6880 fix.go:56] duration metric: took 23.508709ms for fixHost
	I0617 04:27:44.096205    6880 start.go:83] releasing machines lock for "functional-296000", held for 23.671916ms
	W0617 04:27:44.096392    6880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:27:44.104091    6880 out.go:177] 
	W0617 04:27:44.108144    6880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:27:44.108167    6880 out.go:239] * 
	W0617 04:27:44.110799    6880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:27:44.118097    6880 out.go:177] 
	
	
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-296000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | -p download-only-246000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | -p download-only-763000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | --download-only -p                                                       | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | binary-mirror-001000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51054                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-001000                                                  | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| addons  | enable dashboard -p                                                      | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | addons-585000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | addons-585000                                                            |                      |         |         |                     |                     |
| start   | -p addons-585000 --wait=true                                             | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-585000                                                         | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | -p nospam-533000 -n=1 --memory=2250 --wait=false                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-533000                                                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --context functional-296000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/17 04:27:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
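The header above gives the glog line layout: a severity letter (I/W/E/F), an mmdd date, a time with microseconds, a thread id, and the emitting source file:line. A minimal sketch, assuming the log were saved to a file named logs.txt (the filename suggested by minikube's advice box further below), that keeps only the warning/error/fatal lines by that prefix:

    # match the glog severity prefix, e.g. "W0617 04:27:39.069864 ..."
    grep -E '^[WEF][0-9]{4} ' logs.txt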
I0617 04:27:38.967777    6880 out.go:291] Setting OutFile to fd 1 ...
I0617 04:27:38.967896    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:38.967902    6880 out.go:304] Setting ErrFile to fd 2...
I0617 04:27:38.967904    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:38.968043    6880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:27:38.969234    6880 out.go:298] Setting JSON to false
I0617 04:27:38.985488    6880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3428,"bootTime":1718620230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0617 04:27:38.985549    6880 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0617 04:27:38.991903    6880 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0617 04:27:38.998813    6880 out.go:177]   - MINIKUBE_LOCATION=19087
I0617 04:27:38.998866    6880 notify.go:220] Checking for updates...
I0617 04:27:39.006855    6880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
I0617 04:27:39.010795    6880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0617 04:27:39.013845    6880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0617 04:27:39.016861    6880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
I0617 04:27:39.019869    6880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0617 04:27:39.023128    6880 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:27:39.023183    6880 driver.go:392] Setting default libvirt URI to qemu:///system
I0617 04:27:39.028293    6880 out.go:177] * Using the qemu2 driver based on existing profile
I0617 04:27:39.035760    6880 start.go:297] selected driver: qemu2
I0617 04:27:39.035763    6880 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0617 04:27:39.035805    6880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0617 04:27:39.038008    6880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0617 04:27:39.038050    6880 cni.go:84] Creating CNI manager for ""
I0617 04:27:39.038058    6880 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0617 04:27:39.038109    6880 start.go:340] cluster config:
{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0617 04:27:39.042601    6880 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0617 04:27:39.050804    6880 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
I0617 04:27:39.054843    6880 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0617 04:27:39.054858    6880 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0617 04:27:39.054863    6880 cache.go:56] Caching tarball of preloaded images
I0617 04:27:39.054932    6880 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0617 04:27:39.054936    6880 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0617 04:27:39.055005    6880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/functional-296000/config.json ...
I0617 04:27:39.055523    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0617 04:27:39.055559    6880 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "functional-296000"
I0617 04:27:39.055570    6880 start.go:96] Skipping create...Using existing machine configuration
I0617 04:27:39.055576    6880 fix.go:54] fixHost starting: 
I0617 04:27:39.055699    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0617 04:27:39.055707    6880 fix.go:138] unexpected machine state, will restart: <nil>
I0617 04:27:39.059781    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0617 04:27:39.067619    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
I0617 04:27:39.069748    6880 main.go:141] libmachine: STDOUT: 
I0617 04:27:39.069763    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0617 04:27:39.069797    6880 fix.go:56] duration metric: took 14.219916ms for fixHost
I0617 04:27:39.069801    6880 start.go:83] releasing machines lock for "functional-296000", held for 14.239334ms
W0617 04:27:39.069806    6880 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0617 04:27:39.069864    6880 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0617 04:27:39.069869    6880 start.go:728] Will try again in 5 seconds ...
I0617 04:27:44.072054    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0617 04:27:44.072520    6880 start.go:364] duration metric: took 381.75µs to acquireMachinesLock for "functional-296000"
I0617 04:27:44.072671    6880 start.go:96] Skipping create...Using existing machine configuration
I0617 04:27:44.072686    6880 fix.go:54] fixHost starting: 
I0617 04:27:44.073426    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0617 04:27:44.073444    6880 fix.go:138] unexpected machine state, will restart: <nil>
I0617 04:27:44.082885    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0617 04:27:44.086293    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
I0617 04:27:44.096045    6880 main.go:141] libmachine: STDOUT: 
I0617 04:27:44.096089    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0617 04:27:44.096191    6880 fix.go:56] duration metric: took 23.508709ms for fixHost
I0617 04:27:44.096205    6880 start.go:83] releasing machines lock for "functional-296000", held for 23.671916ms
W0617 04:27:44.096392    6880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0617 04:27:44.104091    6880 out.go:177] 
W0617 04:27:44.108144    6880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0617 04:27:44.108167    6880 out.go:239] * 
W0617 04:27:44.110799    6880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0617 04:27:44.118097    6880 out.go:177] 

* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.09s)
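Every failed start in this run dies the same way: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. A minimal troubleshooting sketch for the Homebrew-based setup implied by the qemu invocation above; the brew service name follows the minikube qemu2 driver docs, and the exact commands are assumptions about this agent, not part of the test output:

    # Does the daemon socket exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # Restart the daemon via Homebrew services; it must run as root
    # so it can create /var/run/socket_vmnet
    HOMEBREW=$(which brew)
    sudo "${HOMEBREW}" services restart socket_vmnet

    # Then retry the failed profile with the binary under test
    out/minikube-darwin-arm64 start -p functional-296000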

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3659182819/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | -p download-only-246000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | -o=json --download-only                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | -p download-only-763000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-246000                                                  | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| delete  | -p download-only-763000                                                  | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | --download-only -p                                                       | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | binary-mirror-001000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51054                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-001000                                                  | binary-mirror-001000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| addons  | enable dashboard -p                                                      | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | addons-585000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | addons-585000                                                            |                      |         |         |                     |                     |
| start   | -p addons-585000 --wait=true                                             | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-585000                                                         | addons-585000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
| start   | -p nospam-533000 -n=1 --memory=2250 --wait=false                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-533000 --log_dir                                                  | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-533000                                                         | nospam-533000        | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT | 17 Jun 24 04:27 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --context functional-296000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 17 Jun 24 04:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/17 04:27:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0617 04:27:38.967777    6880 out.go:291] Setting OutFile to fd 1 ...
I0617 04:27:38.967896    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:38.967902    6880 out.go:304] Setting ErrFile to fd 2...
I0617 04:27:38.967904    6880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:38.968043    6880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:27:38.969234    6880 out.go:298] Setting JSON to false
I0617 04:27:38.985488    6880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3428,"bootTime":1718620230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0617 04:27:38.985549    6880 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0617 04:27:38.991903    6880 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0617 04:27:38.998813    6880 out.go:177]   - MINIKUBE_LOCATION=19087
I0617 04:27:38.998866    6880 notify.go:220] Checking for updates...
I0617 04:27:39.006855    6880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
I0617 04:27:39.010795    6880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0617 04:27:39.013845    6880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0617 04:27:39.016861    6880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
I0617 04:27:39.019869    6880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0617 04:27:39.023128    6880 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:27:39.023183    6880 driver.go:392] Setting default libvirt URI to qemu:///system
I0617 04:27:39.028293    6880 out.go:177] * Using the qemu2 driver based on existing profile
I0617 04:27:39.035760    6880 start.go:297] selected driver: qemu2
I0617 04:27:39.035763    6880 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0617 04:27:39.035805    6880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0617 04:27:39.038008    6880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0617 04:27:39.038050    6880 cni.go:84] Creating CNI manager for ""
I0617 04:27:39.038058    6880 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0617 04:27:39.038109    6880 start.go:340] cluster config:
{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0617 04:27:39.042601    6880 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0617 04:27:39.050804    6880 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
I0617 04:27:39.054843    6880 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0617 04:27:39.054858    6880 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0617 04:27:39.054863    6880 cache.go:56] Caching tarball of preloaded images
I0617 04:27:39.054932    6880 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0617 04:27:39.054936    6880 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0617 04:27:39.055005    6880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/functional-296000/config.json ...
I0617 04:27:39.055523    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0617 04:27:39.055559    6880 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "functional-296000"
I0617 04:27:39.055570    6880 start.go:96] Skipping create...Using existing machine configuration
I0617 04:27:39.055576    6880 fix.go:54] fixHost starting: 
I0617 04:27:39.055699    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0617 04:27:39.055707    6880 fix.go:138] unexpected machine state, will restart: <nil>
I0617 04:27:39.059781    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0617 04:27:39.067619    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
I0617 04:27:39.069748    6880 main.go:141] libmachine: STDOUT: 
I0617 04:27:39.069763    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0617 04:27:39.069797    6880 fix.go:56] duration metric: took 14.219916ms for fixHost
I0617 04:27:39.069801    6880 start.go:83] releasing machines lock for "functional-296000", held for 14.239334ms
W0617 04:27:39.069806    6880 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0617 04:27:39.069864    6880 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0617 04:27:39.069869    6880 start.go:728] Will try again in 5 seconds ...
I0617 04:27:44.072054    6880 start.go:360] acquireMachinesLock for functional-296000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0617 04:27:44.072520    6880 start.go:364] duration metric: took 381.75µs to acquireMachinesLock for "functional-296000"
I0617 04:27:44.072671    6880 start.go:96] Skipping create...Using existing machine configuration
I0617 04:27:44.072686    6880 fix.go:54] fixHost starting: 
I0617 04:27:44.073426    6880 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0617 04:27:44.073444    6880 fix.go:138] unexpected machine state, will restart: <nil>
I0617 04:27:44.082885    6880 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0617 04:27:44.086293    6880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7f:2f:c1:3e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/functional-296000/disk.qcow2
I0617 04:27:44.096045    6880 main.go:141] libmachine: STDOUT: 
I0617 04:27:44.096089    6880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0617 04:27:44.096191    6880 fix.go:56] duration metric: took 23.508709ms for fixHost
I0617 04:27:44.096205    6880 start.go:83] releasing machines lock for "functional-296000", held for 23.671916ms
W0617 04:27:44.096392    6880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0617 04:27:44.104091    6880 out.go:177] 
W0617 04:27:44.108144    6880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0617 04:27:44.108167    6880 out.go:239] * 
W0617 04:27:44.110799    6880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0617 04:27:44.118097    6880 out.go:177] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
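Every restart attempt above fails the same way: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the /var/run/socket_vmnet control socket before QEMU is even launched. A minimal Go sketch of that same reachability probe, assuming nothing beyond the socket path shown in the log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client needs. On this host the
        // dial fails, matching the "Connection refused" lines in the log above.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, the socket_vmnet daemon is simply not running on the host; with a Homebrew-managed install (an assumption based on the /opt/homebrew paths in the QEMU command line above), running "sudo brew services start socket_vmnet" would typically bring it back.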
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.08825ms)
** stderr ** 
	error: context "functional-296000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
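Most of the failures that follow share this root cause: because "minikube start" never completed, the kubeconfig context "functional-296000" was never recreated, so every kubectl invocation aborts before touching a cluster. A small client-go sketch of the same context lookup kubectl performs here; the import path is standard client-go, the rest is illustrative:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the merged kubeconfig the same way kubectl does.
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Contexts["functional-296000"]; !ok {
            // The condition behind: error: context "functional-296000" does not exist
            fmt.Println(`context "functional-296000" does not exist`)
        }
    }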
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] stderr:
I0617 04:28:31.998063    7210 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:31.998444    7210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:31.998447    7210 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:31.998450    7210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:31.998607    7210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:31.998830    7210 mustload.go:65] Loading cluster: functional-296000
I0617 04:28:31.999009    7210 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.002542    7210 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
I0617 04:28:32.006456    7210 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (42.426542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status: exit status 7 (30.29825ms)
-- stdout --
	functional-296000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-296000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.028083ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status -o json: exit status 7 (30.140583ms)
-- stdout --
	{"Name":"functional-296000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-296000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.663584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
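The three status invocations above read the same data in three shapes: the default table, a Go-template format string (the "kublet" key is simply the label the test author chose for the template), and JSON. A sketch of consuming the JSON shape, with the struct fields inferred only from the output captured above, not from minikube's full Status type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Field set inferred from the -o json output above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"functional-296000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
        var st Status
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Printf("%s: host=%s, apiserver=%s\n", st.Name, st.Host, st.APIServer)
    }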
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.143875ms)
** stderr ** 
	error: context "functional-296000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-296000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-296000 describe po hello-node-connect: exit status 1 (26.462917ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:1600: "kubectl --context functional-296000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-296000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-296000 logs -l app=hello-node-connect: exit status 1 (26.410666ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:1606: "kubectl --context functional-296000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-296000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-296000 describe svc hello-node-connect: exit status 1 (26.286333ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:1612: "kubectl --context functional-296000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.743416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-296000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.394667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "echo hello": exit status 83 (47.342208ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"*. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "cat /etc/hostname": exit status 83 (47.907209ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-296000"- but got *"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"*. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.255791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
TestFunctional/parallel/CpCmd (0.27s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.406375ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.6195ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4801270/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4801270/001/cp-test.txt: exit status 83 (42.846125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4801270/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.760625ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4801270/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.648458ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (40.059583ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
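The "(-want +got)" blocks in this test are go-cmp diffs: "-" lines show the expected cp-test.txt content and "+" lines show the stopped-host banner that came back instead. A minimal reproduction of that diff output, assuming the github.com/google/go-cmp module:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-296000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-296000\"\n"
        // cmp.Diff returns "" when equal, otherwise a -want/+got style diff.
        fmt.Println(cmp.Diff(want, got))
    }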
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6540/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/6540/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/6540/hosts": exit status 83 (39.158542ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/6540/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.482208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6540.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/6540.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/6540.pem": exit status 83 (46.141ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6540.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/6540.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6540.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6540.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/6540.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/6540.pem": exit status 83 (40.717959ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6540.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /usr/share/ca-certificates/6540.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6540.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (38.541666ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/65402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/65402.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/65402.pem": exit status 83 (47.261125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/65402.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/65402.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/65402.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/65402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/65402.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/65402.pem": exit status 83 (41.745917ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/65402.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /usr/share/ca-certificates/65402.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/65402.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.827167ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.218167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
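Each CertSync check expects to read an exact PEM certificate back out of the VM, which is why the stopped-host banner replaces the entire certificate in every diff above. As a point of reference, checking that a blob is at least a parseable certificate needs only the standard library; the PEM literal here is a placeholder, not the test's minikube_test.pem:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
    )

    func main() {
        data := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            // This branch is what the stopped-host banner would also hit.
            fmt.Println("not a PEM certificate")
            return
        }
        if _, err := x509.ParseCertificate(block.Bytes); err != nil {
            fmt.Println("PEM decoded but certificate did not parse:", err)
            return
        }
        fmt.Println("certificate parses")
    }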
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-296000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-296000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.805792ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-296000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.880334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
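
The go-template here is the crux of the check: it walks the first node's metadata.labels map and prints each key. As a reference for what a passing run would print, here is a minimal, self-contained Go sketch of the same range construct over a hypothetical label map (the real test feeds it (index .items 0).metadata.labels via kubectl, and the label values below are placeholders, not from this run):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Hypothetical node labels; a healthy minikube node carries at least
        // the five minikube.k8s.io/* keys the assertions above look for.
        labels := map[string]string{
            "minikube.k8s.io/commit":  "example",
            "minikube.k8s.io/name":    "functional-296000",
            "minikube.k8s.io/primary": "true",
        }
        // The same template string the test passes to kubectl --template:
        // range the map and emit each key followed by a space.
        t := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
        if err := t.Execute(os.Stdout, labels); err != nil {
            panic(err)
        }
    }

With the kubeconfig context missing, kubectl never gets this far, so every label assertion fails on an empty result.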

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo systemctl is-active crio": exit status 83 (38.303166ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 version -o=json --components: exit status 83 (40.8675ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
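
The ten names in the messages above are the exact substrings the test expects in the --components JSON. A minimal sketch of that containment check, under the assumption that it is a plain substring test (the out value is a placeholder, since the real command exited 83 before emitting any JSON):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        out := "" // placeholder for `minikube version -o=json --components` output
        for _, want := range []string{
            "buildctl", "commit", "containerd", "crictl", "crio",
            "ctr", "docker", "minikubeVersion", "podman", "crun",
        } {
            // Report any expected component name missing from the output.
            if !strings.Contains(out, want) {
                fmt.Printf("expected to see %q in the components output\n", want)
            }
        }
    }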

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr:
I0617 04:28:32.395749    7226 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:32.395897    7226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.395901    7226 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:32.395902    7226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.396034    7226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:32.396413    7226 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.396472    7226 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr:
I0617 04:28:32.613240    7238 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:32.613388    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.613391    7238 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:32.613393    7238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.613541    7238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:32.613955    7238 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.614290    7238 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr:
I0617 04:28:32.577540    7236 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:32.577706    7236 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.577709    7236 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:32.577712    7236 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.577831    7236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:32.578258    7236 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.578321    7236 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr:
I0617 04:28:32.431288    7228 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:32.431449    7228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.431452    7228 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:32.431454    7228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.431577    7228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:32.432048    7228 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.432105    7228 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
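
Taken together, the four ImageList variants above show the same shape of failure: `image ls` succeeds but returns an empty set in every format (a blank short list, an empty table, `[]` for JSON and YAML), presumably because there is no running guest runtime to enumerate, so each variant reaches its content assertion for registry.k8s.io/pause and fails there rather than on the command itself.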

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh pgrep buildkitd: exit status 83 (40.708209ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image build -t localhost/my-image:functional-296000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image build -t localhost/my-image:functional-296000 testdata/build --alsologtostderr:
I0617 04:28:32.507626    7232 out.go:291] Setting OutFile to fd 1 ...
I0617 04:28:32.508025    7232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.508028    7232 out.go:304] Setting ErrFile to fd 2...
I0617 04:28:32.508031    7232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:28:32.508243    7232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:28:32.508644    7232 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.509072    7232 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:28:32.509294    7232 build_images.go:133] succeeded building to: 
I0617 04:28:32.509297    7232 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "localhost/my-image:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-296000 docker-env) && out/minikube-darwin-arm64 status -p functional-296000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-296000 docker-env) && out/minikube-darwin-arm64 status -p functional-296000": exit status 1 (42.64025ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (42.727833ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
** stderr ** 
	I0617 04:28:32.272416    7219 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:28:32.273191    7219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.273194    7219 out.go:304] Setting ErrFile to fd 2...
	I0617 04:28:32.273197    7219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.273360    7219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:28:32.273571    7219 mustload.go:65] Loading cluster: functional-296000
	I0617 04:28:32.273761    7219 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:28:32.278127    7219 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0617 04:28:32.282239    7219 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (38.293625ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
** stderr ** 
	I0617 04:28:32.357917    7223 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:28:32.358073    7223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.358076    7223 out.go:304] Setting ErrFile to fd 2...
	I0617 04:28:32.358078    7223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.358208    7223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:28:32.358417    7223 mustload.go:65] Loading cluster: functional-296000
	I0617 04:28:32.358598    7223 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:28:32.360498    7223 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0617 04:28:32.364283    7223 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (42.98125ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
** stderr ** 
	I0617 04:28:32.315992    7221 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:28:32.316187    7221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.316190    7221 out.go:304] Setting ErrFile to fd 2...
	I0617 04:28:32.316192    7221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:32.316328    7221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:28:32.316554    7221 mustload.go:65] Loading cluster: functional-296000
	I0617 04:28:32.316749    7221 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:28:32.321316    7221 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0617 04:28:32.325343    7221 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.187208ms)

** stderr ** 
	error: context "functional-296000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service list: exit status 83 (42.714834ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-296000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service list -o json: exit status 83 (44.839208ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-296000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node: exit status 83 (41.78875ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}: exit status 83 (46.786875ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service hello-node --url: exit status 83 (41.932375ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-296000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:1565: failed to parse "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"": parse "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
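
The final parse error is ordinary net/url behavior rather than anything minikube-specific: the advice text the test received instead of an endpoint contains a newline, and Go's url.Parse rejects ASCII control characters. A minimal reproduction:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The "URL" the test tried to parse is the two-line advice text.
        s := "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\""
        if _, err := url.Parse(s); err != nil {
            fmt.Println(err) // ...net/url: invalid control character in URL
        }
    }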

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0617 04:27:46.307595    7004 out.go:291] Setting OutFile to fd 1 ...
I0617 04:27:46.307894    7004 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:46.307899    7004 out.go:304] Setting ErrFile to fd 2...
I0617 04:27:46.307901    7004 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:27:46.308032    7004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:27:46.308289    7004 mustload.go:65] Loading cluster: functional-296000
I0617 04:27:46.308493    7004 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:27:46.313410    7004 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
I0617 04:27:46.324450    7004 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

stdout: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7003: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
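
The teardown noise here ("unable to find parent, assuming dead", "process already finished", "read |0: file already closed") is secondary: both tunnel daemons had already exited with code 83, so by the time the harness tried to stop them and drain their pipes there was nothing left to stop or read.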

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-296000": client config: context "functional-296000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-296000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-296000 get svc nginx-svc: exit status 1 (68.579125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-296000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.47s)
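
The opening error is what Go's HTTP client reports when the tunnel never yields a service IP and the test builds its request from an empty host. A minimal reproduction:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // "http://" plus an empty IP: a scheme with no host.
        if _, err := http.Get("http://"); err != nil {
            fmt.Println(err) // Get "http:": http: no Host in request URL
        }
    }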

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.370728334s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.321254833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.160758584s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-296000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.180675375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image save gcr.io/google-containers/addon-resizer:functional-296000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
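
The missing tarball also predetermines the next test: ImageLoadFromFile below loads the same /Users/jenkins/workspace/addon-resizer-save.tar path that `image save` never wrote.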

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.026006583s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
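
The elapsed time matches dig's retry budget exactly: +time=5 allows 5 seconds per attempt and +tries=3 allows three attempts, so the worst case is 3 × 5 s = 15 s, in line with the 15.026 s before exit status 9. The scutil dump in the log shows the cluster.local resolver at 10.96.0.10 installed and marked Reachable; with no tunnel running, the queries simply go unanswered.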

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.78s)

TestMultiControlPlane/serial/StartCluster (10.2s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-635000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-635000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.126764542s)

-- stdout --
	* [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:30:30.253469    7290 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:30:30.253623    7290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:30:30.253626    7290 out.go:304] Setting ErrFile to fd 2...
	I0617 04:30:30.253629    7290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:30:30.254069    7290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:30:30.255359    7290 out.go:298] Setting JSON to false
	I0617 04:30:30.272248    7290 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3600,"bootTime":1718620230,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:30:30.272313    7290 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:30:30.278798    7290 out.go:177] * [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:30:30.286783    7290 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:30:30.286838    7290 notify.go:220] Checking for updates...
	I0617 04:30:30.290593    7290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:30:30.293650    7290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:30:30.296689    7290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:30:30.299616    7290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:30:30.302666    7290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:30:30.305795    7290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:30:30.308639    7290 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:30:30.315662    7290 start.go:297] selected driver: qemu2
	I0617 04:30:30.315670    7290 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:30:30.315676    7290 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:30:30.317932    7290 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:30:30.319457    7290 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:30:30.322789    7290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:30:30.322825    7290 cni.go:84] Creating CNI manager for ""
	I0617 04:30:30.322829    7290 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0617 04:30:30.322833    7290 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 04:30:30.322865    7290 start.go:340] cluster config:
	{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:30:30.327146    7290 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:30:30.335591    7290 out.go:177] * Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	I0617 04:30:30.339603    7290 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:30:30.339619    7290 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:30:30.339626    7290 cache.go:56] Caching tarball of preloaded images
	I0617 04:30:30.339694    7290 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:30:30.339700    7290 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:30:30.339941    7290 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/ha-635000/config.json ...
	I0617 04:30:30.339953    7290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/ha-635000/config.json: {Name:mk376d267cfcb4ebb92f6d6e879a14634df1c629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:30:30.340346    7290 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:30:30.340380    7290 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "ha-635000"
	I0617 04:30:30.340392    7290 start.go:93] Provisioning new machine with config: &{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:30:30.340422    7290 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:30:30.348633    7290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:30:30.365694    7290 start.go:159] libmachine.API.Create for "ha-635000" (driver="qemu2")
	I0617 04:30:30.365717    7290 client.go:168] LocalClient.Create starting
	I0617 04:30:30.365777    7290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:30:30.365805    7290 main.go:141] libmachine: Decoding PEM data...
	I0617 04:30:30.365821    7290 main.go:141] libmachine: Parsing certificate...
	I0617 04:30:30.365860    7290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:30:30.365882    7290 main.go:141] libmachine: Decoding PEM data...
	I0617 04:30:30.365893    7290 main.go:141] libmachine: Parsing certificate...
	I0617 04:30:30.366356    7290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:30:30.515867    7290 main.go:141] libmachine: Creating SSH key...
	I0617 04:30:30.776302    7290 main.go:141] libmachine: Creating Disk image...
	I0617 04:30:30.776317    7290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:30:30.776558    7290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:30:30.789834    7290 main.go:141] libmachine: STDOUT: 
	I0617 04:30:30.789855    7290 main.go:141] libmachine: STDERR: 
	I0617 04:30:30.789912    7290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2 +20000M
	I0617 04:30:30.801031    7290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:30:30.801046    7290 main.go:141] libmachine: STDERR: 
	I0617 04:30:30.801062    7290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
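
Note: the two qemu-img invocations above are the driver's normal disk-provisioning sequence: a raw scaffold is converted to qcow2, then grown by +20000M, so the guest sees a 20000 MB disk while the host file stays small (qcow2 allocates on write). A quick way to inspect the result by hand, assuming qemu-img is on PATH (the path below is taken from this log):

    qemu-img info /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
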
	I0617 04:30:30.801069    7290 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:30:30.801099    7290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:91:f1:ae:2d:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:30:30.802737    7290 main.go:141] libmachine: STDOUT: 
	I0617 04:30:30.802750    7290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:30:30.802776    7290 client.go:171] duration metric: took 437.062333ms to LocalClient.Create
	I0617 04:30:32.804852    7290 start.go:128] duration metric: took 2.464489333s to createHost
	I0617 04:30:32.804893    7290 start.go:83] releasing machines lock for "ha-635000", held for 2.464579625s
	W0617 04:30:32.804931    7290 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:30:32.818433    7290 out.go:177] * Deleting "ha-635000" in qemu2 ...
	W0617 04:30:32.843373    7290 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:30:32.843410    7290 start.go:728] Will try again in 5 seconds ...
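
Note: the STDERR above is the real failure, not anything inside QEMU: socket_vmnet_client could not reach the socket_vmnet daemon at the SocketVMnetPath recorded in the cluster config (/var/run/socket_vmnet), so the network file descriptor was never handed to qemu-system-aarch64 and the VM never launched. A host-side sketch for checking and starting the daemon, assuming a Homebrew-managed socket_vmnet (which matches the /opt/socket_vmnet paths in this log; service management differs for other install methods):

    ls -l /var/run/socket_vmnet
    sudo brew services start socket_vmnet
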
	I0617 04:30:37.844840    7290 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:30:37.845348    7290 start.go:364] duration metric: took 389.209µs to acquireMachinesLock for "ha-635000"
	I0617 04:30:37.845476    7290 start.go:93] Provisioning new machine with config: &{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:30:37.845959    7290 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:30:37.858152    7290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:30:37.907991    7290 start.go:159] libmachine.API.Create for "ha-635000" (driver="qemu2")
	I0617 04:30:37.908033    7290 client.go:168] LocalClient.Create starting
	I0617 04:30:37.908134    7290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:30:37.908194    7290 main.go:141] libmachine: Decoding PEM data...
	I0617 04:30:37.908210    7290 main.go:141] libmachine: Parsing certificate...
	I0617 04:30:37.908265    7290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:30:37.908308    7290 main.go:141] libmachine: Decoding PEM data...
	I0617 04:30:37.908327    7290 main.go:141] libmachine: Parsing certificate...
	I0617 04:30:37.908880    7290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:30:38.066104    7290 main.go:141] libmachine: Creating SSH key...
	I0617 04:30:38.284610    7290 main.go:141] libmachine: Creating Disk image...
	I0617 04:30:38.284619    7290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:30:38.284812    7290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:30:38.297771    7290 main.go:141] libmachine: STDOUT: 
	I0617 04:30:38.297790    7290 main.go:141] libmachine: STDERR: 
	I0617 04:30:38.297840    7290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2 +20000M
	I0617 04:30:38.308695    7290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:30:38.308720    7290 main.go:141] libmachine: STDERR: 
	I0617 04:30:38.308734    7290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:30:38.308738    7290 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:30:38.308777    7290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:80:f8:6a:48:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:30:38.310497    7290 main.go:141] libmachine: STDOUT: 
	I0617 04:30:38.310516    7290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:30:38.310530    7290 client.go:171] duration metric: took 402.502916ms to LocalClient.Create
	I0617 04:30:40.312644    7290 start.go:128] duration metric: took 2.466729167s to createHost
	I0617 04:30:40.312697    7290 start.go:83] releasing machines lock for "ha-635000", held for 2.46739825s
	W0617 04:30:40.313121    7290 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:30:40.322508    7290 out.go:177] 
	W0617 04:30:40.326626    7290 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:30:40.326652    7290 out.go:239] * 
	* 
	W0617 04:30:40.329295    7290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:30:40.337376    7290 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-635000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (67.307291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.20s)
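
Note: every TestMultiControlPlane subtest below fails as a cascade of this one provisioning error: the ha-635000 profile exists on disk but its host never started, so there is no kubeconfig context and no API server to reach. The recovery path the log itself suggests, once socket_vmnet is reachable, is to delete the profile and start again with the same arguments:

    out/minikube-darwin-arm64 delete -p ha-635000
    out/minikube-darwin-arm64 start -p ha-635000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2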

TestMultiControlPlane/serial/DeployApp (115.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.041917ms)

** stderr ** 
	error: cluster "ha-635000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
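
Note: "minikube kubectl -p ha-635000 --" forwards the trailing arguments to a version-matched kubectl pointed at the profile's cluster, which is why the error reads cluster "ha-635000" does not exist rather than a connection timeout: provisioning aborted before any kubeconfig entry was written. That can be confirmed from the kubeconfig directly (standard kubectl; ha-635000 would be absent from the output here):

    kubectl config get-contexts
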
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- rollout status deployment/busybox: exit status 1 (56.803041ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.305292ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.33425ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.156208ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.492666ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.857875ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.491042ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.061709ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.914ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.4345ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.602208ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.310542ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
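
Note: the identical failures above are the test's polling loop: ha_test.go:140 re-runs the same jsonpath query until its deadline expires, and that retry window, not the queries themselves (each roughly 0.1 s), accounts for nearly all of this subtest's 115.26 s. Against a healthy cluster the same command prints the pod IPs space-separated:

    out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].status.podIP}'
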
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.255792ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.662833ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.02825ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.90225ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.70075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (115.26s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-635000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.407208ms)

** stderr ** 
	error: no server found for cluster "ha-635000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.827333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-635000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-635000 -v=7 --alsologtostderr: exit status 83 (42.572959ms)

-- stdout --
	* The control-plane node ha-635000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-635000"

-- /stdout --
** stderr ** 
	I0617 04:32:35.794039    7386 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:35.794447    7386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:35.794451    7386 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:35.794453    7386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:35.794611    7386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:35.794830    7386 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:35.795002    7386 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:35.800016    7386 out.go:177] * The control-plane node ha-635000 host is not running: state=Stopped
	I0617 04:32:35.804003    7386 out.go:177]   To start a cluster, run: "minikube start -p ha-635000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-635000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.416167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-635000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-635000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.616125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-635000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-635000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-635000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
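
Note: the two errors here are one failure: kubectl wrote only to stderr, so the test went on to JSON-decode an empty stdout, and decoding zero bytes is exactly what produces "unexpected end of JSON input". With a valid context the same query emits a bracketed list of per-node label maps (the placeholder below stands in for any working kubeconfig context):

    kubectl --context <context> get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
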
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.906375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-635000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-635000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.216042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
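
Note: both assertions in this subtest parse the same "profile list --output json" payload and fail for the same underlying reason: the saved profile carries a single entry in Config.Nodes and Status "Stopped", so neither the 4-node count nor the "HAppy" status check can pass. A sketch for eyeballing exactly those fields, assuming jq is installed:

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'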

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status --output json -v=7 --alsologtostderr: exit status 7 (29.944917ms)

-- stdout --
	{"Name":"ha-635000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0617 04:32:36.024227    7399 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:36.024395    7399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.024398    7399 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:36.024400    7399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.024516    7399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:36.024642    7399 out.go:298] Setting JSON to true
	I0617 04:32:36.024651    7399 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:36.024723    7399 notify.go:220] Checking for updates...
	I0617 04:32:36.024859    7399 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:36.024866    7399 status.go:255] checking status of ha-635000 ...
	I0617 04:32:36.025072    7399 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:36.025076    7399 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:36.025078    7399 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-635000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
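
Note: the unmarshal error is itself diagnostic: with only the one stopped node in the profile, "status --output json" printed the single JSON object shown in stdout above, while the test expects an array of per-node statuses ([]cmd.Status), and an object cannot be decoded into a Go slice. The payload's top-level JSON type can be checked directly, assuming jq is available:

    out/minikube-darwin-arm64 -p ha-635000 status --output json | jq 'type'
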
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.207ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.335334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0617 04:32:36.083982    7403 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:36.084559    7403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.084562    7403 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:36.084565    7403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.084730    7403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:36.084959    7403 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:36.085150    7403 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:36.089434    7403 out.go:177] 
	W0617 04:32:36.092367    7403 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0617 04:32:36.092371    7403 out.go:239] * 
	* 
	W0617 04:32:36.094228    7403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:32:36.098358    7403 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-635000 node stop m02 -v=7 --alsologtostderr": exit status 85
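
Note: exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the StartCluster failure: the profile only ever recorded the primary control-plane node, so there is no m02 for "node stop" to look up. The profile's node inventory can be listed to confirm:

    out/minikube-darwin-arm64 -p ha-635000 node list
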
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (30.435166ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:36.131009    7405 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:36.131175    7405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.131178    7405 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:36.131180    7405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.131315    7405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:36.131427    7405 out.go:298] Setting JSON to false
	I0617 04:32:36.131440    7405 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:36.131498    7405 notify.go:220] Checking for updates...
	I0617 04:32:36.131678    7405 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:36.131684    7405 status.go:255] checking status of ha-635000 ...
	I0617 04:32:36.131888    7405 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:36.131892    7405 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:36.131897    7405 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.146541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-635000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.973166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
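The assertion at ha_test.go:413 reads the Status field out of the 'profile list --output json' payload quoted above. A self-contained Go sketch of that decode, keeping only the fields the check inspects (the struct names here are illustrative, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the payload in the failure message above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-635000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println(p.Name, p.Status) // prints "ha-635000 Stopped", not "Degraded"
	}
}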

TestMultiControlPlane/serial/RestartSecondaryNode (59.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.496542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0617 04:32:36.292161    7415 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:36.292568    7415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.292572    7415 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:36.292575    7415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.292748    7415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:36.292960    7415 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:36.293144    7415 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:36.297388    7415 out.go:177] 
	W0617 04:32:36.301279    7415 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0617 04:32:36.301287    7415 out.go:239] * 
	* 
	W0617 04:32:36.303332    7415 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:32:36.307355    7415 out.go:177] 

** /stderr **
ha_test.go:422: I0617 04:32:36.292161    7415 out.go:291] Setting OutFile to fd 1 ...
I0617 04:32:36.292568    7415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:32:36.292572    7415 out.go:304] Setting ErrFile to fd 2...
I0617 04:32:36.292575    7415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:32:36.292748    7415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:32:36.292960    7415 mustload.go:65] Loading cluster: ha-635000
I0617 04:32:36.293144    7415 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:32:36.297388    7415 out.go:177] 
W0617 04:32:36.301279    7415 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0617 04:32:36.301287    7415 out.go:239] * 
* 
W0617 04:32:36.303332    7415 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0617 04:32:36.307355    7415 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-635000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (30.2275ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:36.339925    7417 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:36.340087    7417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.340090    7417 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:36.340092    7417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:36.340239    7417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:36.340356    7417 out.go:298] Setting JSON to false
	I0617 04:32:36.340366    7417 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:36.340432    7417 notify.go:220] Checking for updates...
	I0617 04:32:36.340555    7417 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:36.340561    7417 status.go:255] checking status of ha-635000 ...
	I0617 04:32:36.340758    7417 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:36.340762    7417 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:36.340764    7417 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (75.730708ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:37.414734    7422 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:37.414977    7422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:37.414982    7422 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:37.414986    7422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:37.415188    7422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:37.415367    7422 out.go:298] Setting JSON to false
	I0617 04:32:37.415387    7422 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:37.415427    7422 notify.go:220] Checking for updates...
	I0617 04:32:37.415701    7422 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:37.415712    7422 status.go:255] checking status of ha-635000 ...
	I0617 04:32:37.416014    7422 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:37.416020    7422 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:37.416022    7422 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (72.672375ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:39.081889    7424 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:39.082099    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:39.082103    7424 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:39.082106    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:39.082272    7424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:39.082443    7424 out.go:298] Setting JSON to false
	I0617 04:32:39.082456    7424 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:39.082500    7424 notify.go:220] Checking for updates...
	I0617 04:32:39.082729    7424 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:39.082737    7424 status.go:255] checking status of ha-635000 ...
	I0617 04:32:39.083004    7424 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:39.083009    7424 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:39.083012    7424 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (75.201208ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:41.201348    7428 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:41.201535    7428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:41.201539    7428 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:41.201542    7428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:41.201718    7428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:41.201885    7428 out.go:298] Setting JSON to false
	I0617 04:32:41.201898    7428 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:41.201935    7428 notify.go:220] Checking for updates...
	I0617 04:32:41.202164    7428 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:41.202172    7428 status.go:255] checking status of ha-635000 ...
	I0617 04:32:41.202441    7428 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:41.202445    7428 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:41.202448    7428 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (76.707208ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:45.988469    7431 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:45.988680    7431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:45.988684    7431 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:45.988687    7431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:45.988901    7431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:45.989071    7431 out.go:298] Setting JSON to false
	I0617 04:32:45.989089    7431 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:45.989131    7431 notify.go:220] Checking for updates...
	I0617 04:32:45.989348    7431 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:45.989357    7431 status.go:255] checking status of ha-635000 ...
	I0617 04:32:45.989643    7431 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:45.989648    7431 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:45.989651    7431 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (76.141625ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:49.435855    7436 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:49.436074    7436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:49.436082    7436 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:49.436086    7436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:49.436259    7436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:49.436404    7436 out.go:298] Setting JSON to false
	I0617 04:32:49.436417    7436 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:49.436453    7436 notify.go:220] Checking for updates...
	I0617 04:32:49.436659    7436 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:49.436666    7436 status.go:255] checking status of ha-635000 ...
	I0617 04:32:49.436933    7436 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:49.436938    7436 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:49.436941    7436 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (74.245167ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:32:59.032083    7440 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:32:59.032333    7440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:59.032340    7440 out.go:304] Setting ErrFile to fd 2...
	I0617 04:32:59.032343    7440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:32:59.032494    7440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:32:59.032655    7440 out.go:298] Setting JSON to false
	I0617 04:32:59.032668    7440 mustload.go:65] Loading cluster: ha-635000
	I0617 04:32:59.032703    7440 notify.go:220] Checking for updates...
	I0617 04:32:59.032923    7440 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:32:59.032932    7440 status.go:255] checking status of ha-635000 ...
	I0617 04:32:59.033252    7440 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:32:59.033257    7440 status.go:343] host is not running, skipping remaining checks
	I0617 04:32:59.033260    7440 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (72.638667ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:33:05.127640    7443 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:05.127833    7443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:05.127837    7443 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:05.127840    7443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:05.128013    7443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:05.128195    7443 out.go:298] Setting JSON to false
	I0617 04:33:05.128210    7443 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:05.128255    7443 notify.go:220] Checking for updates...
	I0617 04:33:05.128525    7443 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:05.128534    7443 status.go:255] checking status of ha-635000 ...
	I0617 04:33:05.128845    7443 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:33:05.128851    7443 status.go:343] host is not running, skipping remaining checks
	I0617 04:33:05.128854    7443 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (73.6845ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:33:19.703230    7451 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:19.703456    7451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:19.703465    7451 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:19.703468    7451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:19.703638    7451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:19.703794    7451 out.go:298] Setting JSON to false
	I0617 04:33:19.703807    7451 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:19.703850    7451 notify.go:220] Checking for updates...
	I0617 04:33:19.704070    7451 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:19.704078    7451 status.go:255] checking status of ha-635000 ...
	I0617 04:33:19.704375    7451 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:33:19.704380    7451 status.go:343] host is not running, skipping remaining checks
	I0617 04:33:19.704383    7451 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (72.813ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:33:35.640950    7455 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:35.641143    7455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:35.641147    7455 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:35.641150    7455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:35.641324    7455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:35.641484    7455 out.go:298] Setting JSON to false
	I0617 04:33:35.641497    7455 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:35.641523    7455 notify.go:220] Checking for updates...
	I0617 04:33:35.641736    7455 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:35.641745    7455 status.go:255] checking status of ha-635000 ...
	I0617 04:33:35.642025    7455 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:33:35.642031    7455 status.go:343] host is not running, skipping remaining checks
	I0617 04:33:35.642034    7455 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (33.160208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.41s)
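The timestamps on the repeated ha_test.go:428 runs above (04:32:36, :37, :39, :41, :45, :49, :59, then 04:33:05, :19, :35) show the test re-invoking "minikube status" with a growing delay until its time budget runs out; every attempt exits 7 because the host never leaves "Stopped". A rough Go sketch of such a backoff poll, assuming a runStatus callback that returns the command's exit code (the doubling delay is an assumption, not the test's exact schedule):

package main

import (
	"fmt"
	"time"
)

// pollUntilRunning re-runs runStatus with an increasing delay until it
// returns 0 or the budget is spent.
func pollUntilRunning(budget time.Duration, runStatus func() int) bool {
	delay := time.Second
	for start := time.Now(); time.Since(start) < budget; delay *= 2 {
		if runStatus() == 0 {
			return true
		}
		time.Sleep(delay)
	}
	return false
}

func main() {
	// Every poll sees exit status 7 ("Stopped"), mirroring the log above.
	ok := pollUntilRunning(5*time.Second, func() int { return 7 })
	fmt.Println("cluster running:", ok) // cluster running: false
}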

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-635000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-635000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.335542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
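The companion check at ha_test.go:304 counts the entries under Config.Nodes in the same payload; the JSON above carries a single control-plane node, hence "4 nodes but have 1 nodes". A short Go sketch of that count (struct names again illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

type profiles struct {
	Valid []struct {
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Reduced to the node list actually present in the quoted payload.
	raw := []byte(`{"valid":[{"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var ps profiles
	if err := json.Unmarshal(raw, &ps); err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(ps.Valid[0].Config.Nodes)) // nodes: 1, not the expected 4
}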

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-635000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-635000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-635000 -v=7 --alsologtostderr: (2.129055875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-635000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-635000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.22343525s)

-- stdout --
	* [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	* Restarting existing qemu2 VM for "ha-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:33:38.002920    7482 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:38.003095    7482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:38.003100    7482 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:38.003103    7482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:38.003275    7482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:38.004501    7482 out.go:298] Setting JSON to false
	I0617 04:33:38.023662    7482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3788,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:33:38.023731    7482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:33:38.027507    7482 out.go:177] * [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:33:38.035474    7482 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:33:38.039486    7482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:33:38.035501    7482 notify.go:220] Checking for updates...
	I0617 04:33:38.046394    7482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:33:38.049453    7482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:33:38.052516    7482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:33:38.055440    7482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:33:38.058854    7482 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:38.058914    7482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:33:38.063495    7482 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:33:38.070435    7482 start.go:297] selected driver: qemu2
	I0617 04:33:38.070440    7482 start.go:901] validating driver "qemu2" against &{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:33:38.070522    7482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:33:38.072807    7482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:33:38.072849    7482 cni.go:84] Creating CNI manager for ""
	I0617 04:33:38.072855    7482 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 04:33:38.072897    7482 start.go:340] cluster config:
	{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:33:38.077436    7482 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:33:38.084423    7482 out.go:177] * Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	I0617 04:33:38.088438    7482 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:33:38.088451    7482 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:33:38.088460    7482 cache.go:56] Caching tarball of preloaded images
	I0617 04:33:38.088515    7482 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:33:38.088521    7482 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:33:38.088584    7482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/ha-635000/config.json ...
	I0617 04:33:38.089053    7482 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:33:38.089090    7482 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "ha-635000"
	I0617 04:33:38.089099    7482 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:33:38.089107    7482 fix.go:54] fixHost starting: 
	I0617 04:33:38.089231    7482 fix.go:112] recreateIfNeeded on ha-635000: state=Stopped err=<nil>
	W0617 04:33:38.089242    7482 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:33:38.097470    7482 out.go:177] * Restarting existing qemu2 VM for "ha-635000" ...
	I0617 04:33:38.101460    7482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:80:f8:6a:48:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:33:38.103572    7482 main.go:141] libmachine: STDOUT: 
	I0617 04:33:38.103596    7482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:33:38.103629    7482 fix.go:56] duration metric: took 14.521833ms for fixHost
	I0617 04:33:38.103635    7482 start.go:83] releasing machines lock for "ha-635000", held for 14.540709ms
	W0617 04:33:38.103643    7482 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:33:38.103687    7482 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:33:38.103692    7482 start.go:728] Will try again in 5 seconds ...
	I0617 04:33:43.103890    7482 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:33:43.104257    7482 start.go:364] duration metric: took 277.916µs to acquireMachinesLock for "ha-635000"
	I0617 04:33:43.104398    7482 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:33:43.104418    7482 fix.go:54] fixHost starting: 
	I0617 04:33:43.105137    7482 fix.go:112] recreateIfNeeded on ha-635000: state=Stopped err=<nil>
	W0617 04:33:43.105168    7482 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:33:43.109785    7482 out.go:177] * Restarting existing qemu2 VM for "ha-635000" ...
	I0617 04:33:43.116931    7482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:80:f8:6a:48:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:33:43.127000    7482 main.go:141] libmachine: STDOUT: 
	I0617 04:33:43.127057    7482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:33:43.127134    7482 fix.go:56] duration metric: took 22.718542ms for fixHost
	I0617 04:33:43.127153    7482 start.go:83] releasing machines lock for "ha-635000", held for 22.877125ms
	W0617 04:33:43.127364    7482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:33:43.134640    7482 out.go:177] 
	W0617 04:33:43.137696    7482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:33:43.137729    7482 out.go:239] * 
	* 
	W0617 04:33:43.140206    7482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:33:43.148660    7482 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-635000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-635000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (32.948459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.48s)
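
Every failure in this group traces back to the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the machine stays Stopped. That precondition can be probed outside the test suite; the following is a minimal sketch (not part of minikube or ha_test.go), dialing the same socket path that is passed to socket_vmnet_client in the libmachine command above:

// probe_socket_vmnet.go - dial the unix socket that socket_vmnet_client
// needs before it can hand a vmnet file descriptor to qemu-system-aarch64.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With the daemon down this fails the same way the driver reports:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}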

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.373292ms)

-- stdout --
	* The control-plane node ha-635000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-635000"

-- /stdout --
** stderr ** 
	I0617 04:33:43.292038    7494 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:43.292455    7494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:43.292459    7494 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:43.292461    7494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:43.292647    7494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:43.292856    7494 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:43.293036    7494 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:43.296044    7494 out.go:177] * The control-plane node ha-635000 host is not running: state=Stopped
	I0617 04:33:43.298993    7494 out.go:177]   To start a cluster, run: "minikube start -p ha-635000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-635000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (30.154166ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:33:43.331362    7496 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:43.331494    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:43.331497    7496 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:43.331499    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:43.331658    7496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:43.331776    7496 out.go:298] Setting JSON to false
	I0617 04:33:43.331786    7496 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:43.331851    7496 notify.go:220] Checking for updates...
	I0617 04:33:43.331978    7496 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:43.331984    7496 status.go:255] checking status of ha-635000 ...
	I0617 04:33:43.332190    7496 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:33:43.332193    7496 status.go:343] host is not running, skipping remaining checks
	I0617 04:33:43.332196    7496 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.026459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-635000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.573334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
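
The assertion above parses `out/minikube-darwin-arm64 profile list --output json` and reads the top-level Status of each valid profile; because the VM never started, it finds "Stopped" where "Degraded" was expected. A small sketch of that decoding step, assuming only the JSON shape visible in the failure message (this is an illustration, not the test's actual code):

// profile_status.go - decode just the fields the assertion inspects.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Abbreviated stand-in for the real `profile list --output json` output.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-635000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // ha-635000: Stopped (test wanted Degraded)
	}
}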

TestMultiControlPlane/serial/StopCluster (3.52s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-635000 stop -v=7 --alsologtostderr: (3.412292584s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr: exit status 7 (69.78125ms)

-- stdout --
	ha-635000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:33:46.945716    7524 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:46.945907    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:46.945912    7524 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:46.945914    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:46.946427    7524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:46.946662    7524 out.go:298] Setting JSON to false
	I0617 04:33:46.946683    7524 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:46.946839    7524 notify.go:220] Checking for updates...
	I0617 04:33:46.947243    7524 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:46.947256    7524 status.go:255] checking status of ha-635000 ...
	I0617 04:33:46.947515    7524 status.go:330] ha-635000 host status = "Stopped" (err=<nil>)
	I0617 04:33:46.947520    7524 status.go:343] host is not running, skipping remaining checks
	I0617 04:33:46.947523    7524 status.go:257] ha-635000 status: &{Name:ha-635000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-635000 status -v=7 --alsologtostderr": ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (32.659625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.52s)
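
The three assertions above scan the plain-text status output for per-node markers; with only the single primary node ever created, each count comes out as one instead of the expected two control planes, three kubelets, and two apiservers. A sketch of that kind of counting, assuming the line-oriented format shown in the stdout block (illustrative, not the literal ha_test.go code):

// count_status.go - count per-node markers in `minikube status` text output.
package main

import (
	"fmt"
	"strings"
)

func main() {
	status := `ha-635000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // 1, expected 2
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))     // 1, expected 3
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // 1, expected 2
}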

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-635000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-635000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1895155s)

-- stdout --
	* [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	* Restarting existing qemu2 VM for "ha-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-635000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:33:47.009452    7528 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:47.009582    7528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:47.009585    7528 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:47.009588    7528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:47.009732    7528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:47.010680    7528 out.go:298] Setting JSON to false
	I0617 04:33:47.026818    7528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3797,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:33:47.026878    7528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:33:47.032605    7528 out.go:177] * [ha-635000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:33:47.040532    7528 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:33:47.040615    7528 notify.go:220] Checking for updates...
	I0617 04:33:47.047418    7528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:33:47.050439    7528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:33:47.053489    7528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:33:47.056510    7528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:33:47.059450    7528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:33:47.062746    7528 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:47.063015    7528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:33:47.067428    7528 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:33:47.074517    7528 start.go:297] selected driver: qemu2
	I0617 04:33:47.074523    7528 start.go:901] validating driver "qemu2" against &{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:33:47.074586    7528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:33:47.076879    7528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:33:47.076922    7528 cni.go:84] Creating CNI manager for ""
	I0617 04:33:47.076927    7528 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 04:33:47.076983    7528 start.go:340] cluster config:
	{Name:ha-635000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-635000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:33:47.081370    7528 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:33:47.089425    7528 out.go:177] * Starting "ha-635000" primary control-plane node in "ha-635000" cluster
	I0617 04:33:47.092413    7528 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:33:47.092431    7528 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:33:47.092440    7528 cache.go:56] Caching tarball of preloaded images
	I0617 04:33:47.092497    7528 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:33:47.092502    7528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:33:47.092578    7528 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/ha-635000/config.json ...
	I0617 04:33:47.092996    7528 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:33:47.093024    7528 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "ha-635000"
	I0617 04:33:47.093033    7528 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:33:47.093039    7528 fix.go:54] fixHost starting: 
	I0617 04:33:47.093155    7528 fix.go:112] recreateIfNeeded on ha-635000: state=Stopped err=<nil>
	W0617 04:33:47.093164    7528 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:33:47.101240    7528 out.go:177] * Restarting existing qemu2 VM for "ha-635000" ...
	I0617 04:33:47.105443    7528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:80:f8:6a:48:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:33:47.107440    7528 main.go:141] libmachine: STDOUT: 
	I0617 04:33:47.107457    7528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:33:47.107488    7528 fix.go:56] duration metric: took 14.447708ms for fixHost
	I0617 04:33:47.107494    7528 start.go:83] releasing machines lock for "ha-635000", held for 14.465792ms
	W0617 04:33:47.107502    7528 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:33:47.107533    7528 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:33:47.107539    7528 start.go:728] Will try again in 5 seconds ...
	I0617 04:33:52.109695    7528 start.go:360] acquireMachinesLock for ha-635000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:33:52.110174    7528 start.go:364] duration metric: took 357.291µs to acquireMachinesLock for "ha-635000"
	I0617 04:33:52.110441    7528 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:33:52.110466    7528 fix.go:54] fixHost starting: 
	I0617 04:33:52.111196    7528 fix.go:112] recreateIfNeeded on ha-635000: state=Stopped err=<nil>
	W0617 04:33:52.111227    7528 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:33:52.119804    7528 out.go:177] * Restarting existing qemu2 VM for "ha-635000" ...
	I0617 04:33:52.123921    7528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:80:f8:6a:48:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/ha-635000/disk.qcow2
	I0617 04:33:52.133685    7528 main.go:141] libmachine: STDOUT: 
	I0617 04:33:52.133759    7528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:33:52.133844    7528 fix.go:56] duration metric: took 23.380125ms for fixHost
	I0617 04:33:52.133868    7528 start.go:83] releasing machines lock for "ha-635000", held for 23.667792ms
	W0617 04:33:52.134064    7528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-635000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:33:52.142787    7528 out.go:177] 
	W0617 04:33:52.146880    7528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:33:52.146905    7528 out.go:239] * 
	* 
	W0617 04:33:52.149422    7528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:33:52.157801    7528 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-635000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (68.877083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-635000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.025416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-635000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-635000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.549708ms)

-- stdout --
	* The control-plane node ha-635000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-635000"

-- /stdout --
** stderr ** 
	I0617 04:33:52.372582    7547 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:33:52.372983    7547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:52.372988    7547 out.go:304] Setting ErrFile to fd 2...
	I0617 04:33:52.372990    7547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:33:52.373187    7547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:33:52.373482    7547 mustload.go:65] Loading cluster: ha-635000
	I0617 04:33:52.373839    7547 config.go:182] Loaded profile config "ha-635000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:33:52.378574    7547 out.go:177] * The control-plane node ha-635000 host is not running: state=Stopped
	I0617 04:33:52.382489    7547 out.go:177]   To start a cluster, run: "minikube start -p ha-635000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-635000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (29.314625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-635000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-635000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-635000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-635000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-635000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-635000 -n ha-635000: exit status 7 (30.096583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (9.81s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-224000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-224000 --driver=qemu2 : exit status 80 (9.7434195s)

-- stdout --
	* [image-224000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-224000" primary control-plane node in "image-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-224000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-224000 -n image-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-224000 -n image-224000: exit status 7 (68.4405ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.81s)

TestJSONOutput/start/Command (9.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-311000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-311000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.716416458s)

-- stdout --
	{"specversion":"1.0","id":"03ea6f2e-c1d1-4c44-ab37-b2ac5dbc9384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-311000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a315e6f-2a39-4d19-b994-d3a19c30c383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19087"}}
	{"specversion":"1.0","id":"905037b9-b8a2-425a-8924-58f9f327b041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig"}}
	{"specversion":"1.0","id":"b1a4df96-cf6e-4a4b-bb6b-4b00abb347ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8f95ed19-4e89-49e1-a768-88902fea21d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2005c17-02e6-47b9-9aac-d6f2a0493a41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube"}}
	{"specversion":"1.0","id":"c89d5d80-a709-44a1-9583-a7e5e22b5be2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10ca28bc-8a60-4b63-b12d-8936e8e19ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"42cce694-e0f2-4279-a165-f1382a9b686f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"04ddddcb-7f20-437d-85ae-21336631733e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-311000\" primary control-plane node in \"json-output-311000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2f24ff6-bd89-4ee0-9ad6-45be206ec197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c47b5176-3e78-4f70-a8e5-e3959e9f1360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-311000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d06a6903-4908-411c-b1b7-b59090160eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"012fd2bc-2876-493e-a658-ba2aa58dbb04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"00817fe6-32aa-4a98-b64a-bbe7e5a89657","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-311000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"caace056-cd3e-4aa1-bf93-d8145bf41903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"287b5c1e-d07a-4130-acec-75746b81dd51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-311000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.72s)
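Note on the parse error at json_output_test.go:70 above: with --output=json, every stdout line is expected to be a CloudEvents JSON object, but the qemu2 driver's raw "OUTPUT: " / "ERROR: ..." lines are plain text, so decoding fails at the first byte. The same mechanism produces the "invalid character '*'" error in the unpause test below. A minimal sketch of that decode step, assuming a line-by-line json.Unmarshal (illustrative; the cloudEvent struct here is a stand-in, not the test's actual code):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent is a stand-in for the CloudEvents fields the test inspects.
type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	// Two lines modeled on the failing run: one real event, one raw driver line.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating VM"}}
OUTPUT: `
	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("event:", ev.Type)
	}
}
```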

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-311000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-311000 --output=json --user=testUser: exit status 83 (77.0825ms)
-- stdout --
	{"specversion":"1.0","id":"5a97b382-c684-4d4d-9f9a-d514d01ea250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-311000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"dc865c9c-684f-4fc4-a22a-7389bef50b31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-311000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-311000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
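For reference, the "(dbg) Non-zero exit: ... exit status 83" lines throughout this report come from running the binary and unwrapping the process exit code on failure. A minimal Go sketch of that pattern (the binary path and arguments are taken from this run; the rest is illustrative, not the harness's real helper):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-darwin-arm64",
		"pause", "-p", "json-output-311000", "--output=json", "--user=testUser")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Mirrors the report's "(dbg) Non-zero exit" line.
		fmt.Printf("(dbg) Non-zero exit: exit status %d (%s)\n", ee.ExitCode(), time.Since(start))
	}
	fmt.Print(string(out))
}
```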

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-311000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-311000 --output=json --user=testUser: exit status 83 (45.437958ms)
-- stdout --
	* The control-plane node json-output-311000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-311000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-311000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-311000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-989000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-989000 --driver=qemu2 : exit status 80 (9.822562042s)
-- stdout --
	* [first-989000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-989000" primary control-plane node in "first-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-989000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-17 04:34:25.9029 -0700 PDT m=+492.122451667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-991000 -n second-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-991000 -n second-991000: exit status 85 (79.76975ms)
-- stdout --
	* Profile "second-991000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-991000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-991000" host is not running, skipping log retrieval (state="* Profile \"second-991000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-991000\"")
helpers_test.go:175: Cleaning up "second-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-991000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-17 04:34:26.211429 -0700 PDT m=+492.430989501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-989000 -n first-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-989000 -n first-989000: exit status 7 (30.88375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-989000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-989000
--- FAIL: TestMinikubeProfile (10.27s)
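Every start failure in this run bottoms out in the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot attach the VM's network. A quick way to confirm this outside minikube is to dial the unix socket directly; a minimal sketch, assuming only the socket path seen in these logs:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the socket_vmnet control socket the qemu2 driver uses.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the failure mode in this run: connect: connection refused
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```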

TestMountStart/serial/StartWithMountFirst (10.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-462000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-462000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.061985s)
-- stdout --
	* [mount-start-1-462000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-462000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-462000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-462000 -n mount-start-1-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-462000 -n mount-start-1-462000: exit status 7 (67.121125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)

TestMultiNode/serial/FreshStart2Nodes (9.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-812000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-812000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.907768667s)
-- stdout --
	* [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-812000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0617 04:34:36.830969    7714 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:34:36.831093    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:34:36.831096    7714 out.go:304] Setting ErrFile to fd 2...
	I0617 04:34:36.831098    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:34:36.831222    7714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:34:36.832297    7714 out.go:298] Setting JSON to false
	I0617 04:34:36.848496    7714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3846,"bootTime":1718620230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:34:36.848565    7714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:34:36.854830    7714 out.go:177] * [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:34:36.867843    7714 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:34:36.862875    7714 notify.go:220] Checking for updates...
	I0617 04:34:36.875848    7714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:34:36.884794    7714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:34:36.892893    7714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:34:36.901835    7714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:34:36.910899    7714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:34:36.916206    7714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:34:36.920828    7714 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:34:36.927830    7714 start.go:297] selected driver: qemu2
	I0617 04:34:36.927838    7714 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:34:36.927845    7714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:34:36.930649    7714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:34:36.933976    7714 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:34:36.937953    7714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:34:36.938006    7714 cni.go:84] Creating CNI manager for ""
	I0617 04:34:36.938012    7714 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0617 04:34:36.938018    7714 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 04:34:36.938056    7714 start.go:340] cluster config:
	{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:34:36.944060    7714 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:34:36.951823    7714 out.go:177] * Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	I0617 04:34:36.955816    7714 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:34:36.955836    7714 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:34:36.955852    7714 cache.go:56] Caching tarball of preloaded images
	I0617 04:34:36.955934    7714 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:34:36.955941    7714 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:34:36.956210    7714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/multinode-812000/config.json ...
	I0617 04:34:36.956224    7714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/multinode-812000/config.json: {Name:mke7f2c6488cb210946dfc26c2c7102669db8029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:34:36.956504    7714 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:34:36.956557    7714 start.go:364] duration metric: took 45.958µs to acquireMachinesLock for "multinode-812000"
	I0617 04:34:36.956571    7714 start.go:93] Provisioning new machine with config: &{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:34:36.956605    7714 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:34:36.964886    7714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:34:36.986077    7714 start.go:159] libmachine.API.Create for "multinode-812000" (driver="qemu2")
	I0617 04:34:36.986100    7714 client.go:168] LocalClient.Create starting
	I0617 04:34:36.986160    7714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:34:36.986191    7714 main.go:141] libmachine: Decoding PEM data...
	I0617 04:34:36.986206    7714 main.go:141] libmachine: Parsing certificate...
	I0617 04:34:36.986237    7714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:34:36.986270    7714 main.go:141] libmachine: Decoding PEM data...
	I0617 04:34:36.986283    7714 main.go:141] libmachine: Parsing certificate...
	I0617 04:34:36.986662    7714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:34:37.131924    7714 main.go:141] libmachine: Creating SSH key...
	I0617 04:34:37.231686    7714 main.go:141] libmachine: Creating Disk image...
	I0617 04:34:37.231694    7714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:34:37.231857    7714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:37.244655    7714 main.go:141] libmachine: STDOUT: 
	I0617 04:34:37.244675    7714 main.go:141] libmachine: STDERR: 
	I0617 04:34:37.244720    7714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2 +20000M
	I0617 04:34:37.255672    7714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:34:37.255686    7714 main.go:141] libmachine: STDERR: 
	I0617 04:34:37.255698    7714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:37.255701    7714 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:34:37.255742    7714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e7:7a:51:ea:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:37.257404    7714 main.go:141] libmachine: STDOUT: 
	I0617 04:34:37.257417    7714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:34:37.257435    7714 client.go:171] duration metric: took 271.335291ms to LocalClient.Create
	I0617 04:34:39.259718    7714 start.go:128] duration metric: took 2.303114083s to createHost
	I0617 04:34:39.259893    7714 start.go:83] releasing machines lock for "multinode-812000", held for 2.303392583s
	W0617 04:34:39.259949    7714 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:34:39.270536    7714 out.go:177] * Deleting "multinode-812000" in qemu2 ...
	W0617 04:34:39.307108    7714 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:34:39.307233    7714 start.go:728] Will try again in 5 seconds ...
	I0617 04:34:44.309252    7714 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:34:44.309755    7714 start.go:364] duration metric: took 413.75µs to acquireMachinesLock for "multinode-812000"
	I0617 04:34:44.309914    7714 start.go:93] Provisioning new machine with config: &{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:34:44.310168    7714 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:34:44.320653    7714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:34:44.370885    7714 start.go:159] libmachine.API.Create for "multinode-812000" (driver="qemu2")
	I0617 04:34:44.370940    7714 client.go:168] LocalClient.Create starting
	I0617 04:34:44.371125    7714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:34:44.371204    7714 main.go:141] libmachine: Decoding PEM data...
	I0617 04:34:44.371222    7714 main.go:141] libmachine: Parsing certificate...
	I0617 04:34:44.371288    7714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:34:44.371332    7714 main.go:141] libmachine: Decoding PEM data...
	I0617 04:34:44.371346    7714 main.go:141] libmachine: Parsing certificate...
	I0617 04:34:44.371955    7714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:34:44.527092    7714 main.go:141] libmachine: Creating SSH key...
	I0617 04:34:44.637117    7714 main.go:141] libmachine: Creating Disk image...
	I0617 04:34:44.637122    7714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:34:44.637284    7714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:44.649906    7714 main.go:141] libmachine: STDOUT: 
	I0617 04:34:44.649930    7714 main.go:141] libmachine: STDERR: 
	I0617 04:34:44.649978    7714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2 +20000M
	I0617 04:34:44.660719    7714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:34:44.660735    7714 main.go:141] libmachine: STDERR: 
	I0617 04:34:44.660745    7714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:44.660760    7714 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:34:44.660796    7714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7f:f5:53:68:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:34:44.662499    7714 main.go:141] libmachine: STDOUT: 
	I0617 04:34:44.662512    7714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:34:44.662525    7714 client.go:171] duration metric: took 291.588ms to LocalClient.Create
	I0617 04:34:46.664724    7714 start.go:128] duration metric: took 2.354491542s to createHost
	I0617 04:34:46.664828    7714 start.go:83] releasing machines lock for "multinode-812000", held for 2.355081834s
	W0617 04:34:46.665144    7714 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:34:46.680841    7714 out.go:177] 
	W0617 04:34:46.683759    7714 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:34:46.683806    7714 out.go:239] * 
	* 
	W0617 04:34:46.686355    7714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:34:46.695795    7714 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-812000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (67.097084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
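The verbose log above shows how the VM's NIC is wired up: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3, i.e. the client connects to the vmnet relay and hands qemu the connected socket as inherited file descriptor 3. A hedged sketch of that fd-passing pattern in Go (not minikube's actual code; in os/exec, ExtraFiles[0] becomes fd 3 in the child):

```go
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Dial the vmnet relay; this is the step that fails with
	// "Connection refused" throughout this run.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatal(err)
	}
	// Dup the connected socket to an *os.File so the child can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] is fd 3 in the child, matching "-netdev socket,id=net0,fd=3".
	cmd := exec.Command("qemu-system-aarch64",
		"-netdev", "socket,id=net0,fd=3",
		"-device", "virtio-net-pci,netdev=net0")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```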

TestMultiNode/serial/DeployApp2Nodes (91.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.9805ms)
** stderr ** 
	error: cluster "multinode-812000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- rollout status deployment/busybox: exit status 1 (56.62725ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.843709ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.460042ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.948542ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.717333ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.237875ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.611667ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.594ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.720083ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.07275ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.642667ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.70125ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.906416ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.795791ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.586292ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (30.134125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.09s)
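The repeated "failed to retrieve Pod IPs (may be temporary)" lines above are a poll-until-deadline loop around the same kubectl query. A minimal sketch of that pattern (the helper name and the 90s/5s budget are illustrative, not the test's actual constants; the command line is the one logged above):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs runs the same query the test log shows, via the minikube kubectl shim.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(90 * time.Second)
	for time.Now().Before(deadline) {
		ips, err := podIPs("multinode-812000")
		if err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		// Mirrors the report's retry message before sleeping and polling again.
		fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs: retries exhausted")
}
```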

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-812000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.8275ms)
** stderr ** 
	error: no server found for cluster "multinode-812000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (30.017166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-812000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-812000 -v 3 --alsologtostderr: exit status 83 (40.319167ms)
-- stdout --
	* The control-plane node multinode-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-812000"
-- /stdout --
** stderr ** 
	I0617 04:36:18.065298    7803 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:18.065496    7803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.065503    7803 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:18.065505    7803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.065633    7803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:18.065859    7803 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:18.066031    7803 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:18.069453    7803 out.go:177] * The control-plane node multinode-812000 host is not running: state=Stopped
	I0617 04:36:18.072415    7803 out.go:177]   To start a cluster, run: "minikube start -p multinode-812000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-812000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (29.440916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-812000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-812000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.384708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-812000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-812000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-812000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (31.338083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-812000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-812000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-812000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-812000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (29.994833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
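
The JSON quoted above already shows the cause of this failure: the profile's Config.Nodes array carries a single control-plane entry, while the test expects three nodes after AddNode. A minimal sketch that decodes just enough of that payload to count nodes — the local type names here are illustrative, not minikube's own:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // profileList models only the fields of `profile list --output json`
    // needed to count nodes; its shape follows the payload quoted above.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Config struct {
    			Nodes []struct {
    				ControlPlane bool `json:"ControlPlane"`
    				Worker       bool `json:"Worker"`
    			} `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	// Trimmed version of the JSON the test received: one node, not three.
    	out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-812000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)

    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // prints 1; the test wants 3
    }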

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status --output json --alsologtostderr: exit status 7 (30.097209ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-812000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:18.292882    7816 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:18.293042    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.293045    7816 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:18.293047    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.293160    7816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:18.293277    7816 out.go:298] Setting JSON to true
	I0617 04:36:18.293287    7816 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:18.293339    7816 notify.go:220] Checking for updates...
	I0617 04:36:18.293470    7816 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:18.293477    7816 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:18.293691    7816 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:18.293695    7816 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:18.293697    7816 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-812000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (30.439667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
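
The decode error above is standard encoding/json behavior rather than corrupt output: with only one node, `status --output json` prints a single object, while the test unmarshals into a slice ([]cmd.Status). A self-contained sketch reproducing the mismatch — the nodeStatus type is a stand-in for minikube's cmd.Status, introduced here only for illustration:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // nodeStatus mirrors the fields printed in the -- stdout -- block above;
    // the type itself is an assumption, not minikube's actual cmd.Status.
    type nodeStatus struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    func main() {
    	// With a single node, `status --output json` emitted one object, not an array.
    	out := []byte(`{"Name":"multinode-812000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

    	var statuses []nodeStatus
    	// Decoding an object into a slice fails the same way the test reports:
    	// "json: cannot unmarshal object into Go value of type []cmd.Status".
    	if err := json.Unmarshal(out, &statuses); err != nil {
    		fmt.Println("decode error:", err)
    	}

    	// Decoding into a single value succeeds, which is why this only breaks
    	// when the caller assumes multi-node output.
    	var one nodeStatus
    	if err := json.Unmarshal(out, &one); err == nil {
    		fmt.Println("single-node decode ok:", one.Host)
    	}
    }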

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 node stop m03: exit status 85 (45.318875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-812000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status: exit status 7 (29.162125ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr: exit status 7 (30.237084ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:18.428919    7824 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:18.429086    7824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.429089    7824 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:18.429091    7824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.429208    7824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:18.429321    7824 out.go:298] Setting JSON to false
	I0617 04:36:18.429331    7824 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:18.429395    7824 notify.go:220] Checking for updates...
	I0617 04:36:18.429533    7824 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:18.429539    7824 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:18.429763    7824 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:18.429767    7824 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:18.429769    7824 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr": multinode-812000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (29.480042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.426083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:18.488074    7828 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:18.488488    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.488492    7828 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:18.488494    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.488946    7828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:18.489196    7828 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:18.489573    7828 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:18.494012    7828 out.go:177] 
	W0617 04:36:18.498056    7828 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0617 04:36:18.498060    7828 out.go:239] * 
	* 
	W0617 04:36:18.500025    7828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:36:18.504129    7828 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0617 04:36:18.488074    7828 out.go:291] Setting OutFile to fd 1 ...
I0617 04:36:18.488488    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:36:18.488492    7828 out.go:304] Setting ErrFile to fd 2...
I0617 04:36:18.488494    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 04:36:18.488946    7828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
I0617 04:36:18.489196    7828 mustload.go:65] Loading cluster: multinode-812000
I0617 04:36:18.489573    7828 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0617 04:36:18.494012    7828 out.go:177] 
W0617 04:36:18.498056    7828 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0617 04:36:18.498060    7828 out.go:239] * 
* 
W0617 04:36:18.500025    7828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0617 04:36:18.504129    7828 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-812000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (30.62875ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:18.538018    7830 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:18.538155    7830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.538158    7830 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:18.538161    7830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:18.538293    7830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:18.538428    7830 out.go:298] Setting JSON to false
	I0617 04:36:18.538437    7830 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:18.538487    7830 notify.go:220] Checking for updates...
	I0617 04:36:18.538638    7830 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:18.538644    7830 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:18.538866    7830 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:18.538869    7830 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:18.538871    7830 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (73.267333ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:19.157716    7832 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:19.157916    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:19.157920    7832 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:19.157923    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:19.158100    7832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:19.158252    7832 out.go:298] Setting JSON to false
	I0617 04:36:19.158265    7832 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:19.158302    7832 notify.go:220] Checking for updates...
	I0617 04:36:19.158483    7832 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:19.158492    7832 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:19.158780    7832 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:19.158785    7832 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:19.158788    7832 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (72.906708ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:20.236234    7836 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:20.236413    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:20.236418    7836 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:20.236420    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:20.236583    7836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:20.236737    7836 out.go:298] Setting JSON to false
	I0617 04:36:20.236751    7836 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:20.236787    7836 notify.go:220] Checking for updates...
	I0617 04:36:20.236994    7836 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:20.237003    7836 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:20.237286    7836 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:20.237291    7836 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:20.237294    7836 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (73.029583ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:22.314515    7838 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:22.314693    7838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:22.314697    7838 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:22.314701    7838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:22.314888    7838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:22.315033    7838 out.go:298] Setting JSON to false
	I0617 04:36:22.315044    7838 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:22.315082    7838 notify.go:220] Checking for updates...
	I0617 04:36:22.315320    7838 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:22.315328    7838 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:22.315582    7838 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:22.315586    7838 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:22.315589    7838 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (72.39225ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:26.402932    7840 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:26.403121    7840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:26.403125    7840 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:26.403128    7840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:26.403272    7840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:26.403433    7840 out.go:298] Setting JSON to false
	I0617 04:36:26.403446    7840 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:26.403480    7840 notify.go:220] Checking for updates...
	I0617 04:36:26.403707    7840 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:26.403716    7840 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:26.403974    7840 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:26.403979    7840 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:26.403982    7840 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (74.175292ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:29.983865    7842 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:29.984086    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:29.984090    7842 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:29.984093    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:29.984271    7842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:29.984434    7842 out.go:298] Setting JSON to false
	I0617 04:36:29.984448    7842 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:29.984493    7842 notify.go:220] Checking for updates...
	I0617 04:36:29.984700    7842 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:29.984709    7842 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:29.985001    7842 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:29.985006    7842 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:29.985009    7842 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (75.744958ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:34.961123    7845 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:34.961347    7845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:34.961351    7845 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:34.961354    7845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:34.961544    7845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:34.961716    7845 out.go:298] Setting JSON to false
	I0617 04:36:34.961729    7845 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:34.961775    7845 notify.go:220] Checking for updates...
	I0617 04:36:34.961990    7845 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:34.961999    7845 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:34.962267    7845 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:34.962271    7845 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:34.962274    7845 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (73.887458ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:40.756116    7851 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:40.756358    7851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:40.756362    7851 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:40.756365    7851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:40.756546    7851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:40.756694    7851 out.go:298] Setting JSON to false
	I0617 04:36:40.756710    7851 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:40.756745    7851 notify.go:220] Checking for updates...
	I0617 04:36:40.756984    7851 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:40.756993    7851 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:40.757278    7851 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:40.757283    7851 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:40.757286    7851 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr: exit status 7 (75.206333ms)

                                                
                                                
-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:36:58.430393    7855 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:36:58.430630    7855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:58.430635    7855 out.go:304] Setting ErrFile to fd 2...
	I0617 04:36:58.430638    7855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:36:58.430851    7855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:36:58.431026    7855 out.go:298] Setting JSON to false
	I0617 04:36:58.431049    7855 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:36:58.431095    7855 notify.go:220] Checking for updates...
	I0617 04:36:58.431304    7855 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:36:58.431315    7855 status.go:255] checking status of multinode-812000 ...
	I0617 04:36:58.431598    7855 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:36:58.431603    7855 status.go:343] host is not running, skipping remaining checks
	I0617 04:36:58.431606    7855 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-812000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (33.923375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (40.01s)
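
The timestamps on the repeated status polls above (04:36:18 through 04:36:58) grow roughly geometrically, consistent with a backoff-style retry that gives up after ~40s. A sketch of such a poll loop under stated assumptions — the binary path is the one used throughout this report, but the attempt count and doubling schedule are illustrative, not the test's actual helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	delay := time.Second
    	for attempt := 1; attempt <= 8; attempt++ {
    		// `minikube status` exits non-zero (exit status 7 here) while the host is stopped.
    		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-812000", "status").CombinedOutput()
    		if err == nil {
    			fmt.Printf("attempt %d: host is up\n%s", attempt, out)
    			return
    		}
    		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay *= 2 // roughly doubling, as the log timestamps suggest
    	}
    	fmt.Println("gave up: host never reported Running")
    }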

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-812000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-812000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-812000: (3.353123125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-812000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-812000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.234884458s)

                                                
                                                
-- stdout --
	* [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	* Restarting existing qemu2 VM for "multinode-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:37:01.913137    7881 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:37:01.913318    7881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:01.913322    7881 out.go:304] Setting ErrFile to fd 2...
	I0617 04:37:01.913325    7881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:01.913529    7881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:37:01.914794    7881 out.go:298] Setting JSON to false
	I0617 04:37:01.934284    7881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3991,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:37:01.934347    7881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:37:01.938815    7881 out.go:177] * [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:37:01.953813    7881 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:37:01.953838    7881 notify.go:220] Checking for updates...
	I0617 04:37:01.960761    7881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:37:01.963757    7881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:37:01.966740    7881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:37:01.969740    7881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:37:01.972652    7881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:37:01.976035    7881 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:37:01.976094    7881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:37:01.980700    7881 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:37:01.987759    7881 start.go:297] selected driver: qemu2
	I0617 04:37:01.987765    7881 start.go:901] validating driver "qemu2" against &{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:37:01.987819    7881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:37:01.990288    7881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:37:01.990338    7881 cni.go:84] Creating CNI manager for ""
	I0617 04:37:01.990344    7881 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 04:37:01.990401    7881 start.go:340] cluster config:
	{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:37:01.995180    7881 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:02.003761    7881 out.go:177] * Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	I0617 04:37:02.007726    7881 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:37:02.007748    7881 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:37:02.007763    7881 cache.go:56] Caching tarball of preloaded images
	I0617 04:37:02.007828    7881 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:37:02.007840    7881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:37:02.007901    7881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/multinode-812000/config.json ...
	I0617 04:37:02.008392    7881 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:02.008438    7881 start.go:364] duration metric: took 39.209µs to acquireMachinesLock for "multinode-812000"
	I0617 04:37:02.008447    7881 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:37:02.008454    7881 fix.go:54] fixHost starting: 
	I0617 04:37:02.008600    7881 fix.go:112] recreateIfNeeded on multinode-812000: state=Stopped err=<nil>
	W0617 04:37:02.008609    7881 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:37:02.016697    7881 out.go:177] * Restarting existing qemu2 VM for "multinode-812000" ...
	I0617 04:37:02.020737    7881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7f:f5:53:68:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:37:02.022888    7881 main.go:141] libmachine: STDOUT: 
	I0617 04:37:02.022907    7881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:02.022940    7881 fix.go:56] duration metric: took 14.484917ms for fixHost
	I0617 04:37:02.022946    7881 start.go:83] releasing machines lock for "multinode-812000", held for 14.5035ms
	W0617 04:37:02.022952    7881 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:37:02.022985    7881 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:02.022990    7881 start.go:728] Will try again in 5 seconds ...
	I0617 04:37:07.025103    7881 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:07.025563    7881 start.go:364] duration metric: took 317.667µs to acquireMachinesLock for "multinode-812000"
	I0617 04:37:07.025675    7881 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:37:07.025701    7881 fix.go:54] fixHost starting: 
	I0617 04:37:07.026467    7881 fix.go:112] recreateIfNeeded on multinode-812000: state=Stopped err=<nil>
	W0617 04:37:07.026497    7881 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:37:07.035614    7881 out.go:177] * Restarting existing qemu2 VM for "multinode-812000" ...
	I0617 04:37:07.039799    7881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7f:f5:53:68:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:37:07.049732    7881 main.go:141] libmachine: STDOUT: 
	I0617 04:37:07.049792    7881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:07.049908    7881 fix.go:56] duration metric: took 24.208083ms for fixHost
	I0617 04:37:07.049931    7881 start.go:83] releasing machines lock for "multinode-812000", held for 24.339875ms
	W0617 04:37:07.050097    7881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:07.057675    7881 out.go:177] 
	W0617 04:37:07.061845    7881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:37:07.061881    7881 out.go:239] * 
	* 
	W0617 04:37:07.064883    7881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:37:07.071675    7881 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-812000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-812000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (33.349584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.72s)
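Note: each restart attempt above fails before the VM ever boots. minikube launches QEMU through socket_vmnet_client, and the client gets "Connection refused" on the unix socket at /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not reachable on the build host. A minimal host-side check, illustrative only (the daemon binary path and gateway address below are assumptions inferred from the client path in the log, not values recorded in this report):

	ls -l /var/run/socket_vmnet    # the daemon's unix socket should exist
	pgrep -fl socket_vmnet         # the daemon process should be running
	# if the daemon is down, it can be started manually (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet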

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 node delete m03: exit status 83 (40.910958ms)

-- stdout --
	* The control-plane node multinode-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-812000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-812000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr: exit status 7 (29.535542ms)

-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:37:07.256323    7896 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:37:07.256486    7896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:07.256494    7896 out.go:304] Setting ErrFile to fd 2...
	I0617 04:37:07.256496    7896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:07.256681    7896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:37:07.257055    7896 out.go:298] Setting JSON to false
	I0617 04:37:07.257068    7896 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:37:07.257096    7896 notify.go:220] Checking for updates...
	I0617 04:37:07.257254    7896 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:37:07.257261    7896 status.go:255] checking status of multinode-812000 ...
	I0617 04:37:07.257457    7896 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:37:07.257461    7896 status.go:343] host is not running, skipping remaining checks
	I0617 04:37:07.257463    7896 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (29.715084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-812000 stop: (3.503487833s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status: exit status 7 (67.654417ms)

-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr: exit status 7 (32.753917ms)

-- stdout --
	multinode-812000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0617 04:37:10.890877    7922 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:37:10.890996    7922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:10.890999    7922 out.go:304] Setting ErrFile to fd 2...
	I0617 04:37:10.891001    7922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:10.891143    7922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:37:10.891256    7922 out.go:298] Setting JSON to false
	I0617 04:37:10.891267    7922 mustload.go:65] Loading cluster: multinode-812000
	I0617 04:37:10.891333    7922 notify.go:220] Checking for updates...
	I0617 04:37:10.891442    7922 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:37:10.891450    7922 status.go:255] checking status of multinode-812000 ...
	I0617 04:37:10.891661    7922 status.go:330] multinode-812000 host status = "Stopped" (err=<nil>)
	I0617 04:37:10.891664    7922 status.go:343] host is not running, skipping remaining checks
	I0617 04:37:10.891666    7922 status.go:257] multinode-812000 status: &{Name:multinode-812000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr": multinode-812000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-812000 status --alsologtostderr": multinode-812000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (29.889375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.63s)
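Note on the two assertion failures above: a two-node cluster, once stopped, should report two "host: Stopped" and two "kubelet: Stopped" entries, but only the control-plane entry is present because the worker node was never created. The check the test performs amounts to counting those markers in the status output; an equivalent manual check (illustrative shell, not taken from the test source):

	out/minikube-darwin-arm64 -p multinode-812000 status | grep -c "host: Stopped"    # a 2-node cluster should report 2; here it reports 1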

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-812000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-812000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18716925s)

-- stdout --
	* [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	* Restarting existing qemu2 VM for "multinode-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:37:10.950308    7926 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:37:10.950429    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:10.950432    7926 out.go:304] Setting ErrFile to fd 2...
	I0617 04:37:10.950434    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:10.950578    7926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:37:10.951585    7926 out.go:298] Setting JSON to false
	I0617 04:37:10.967760    7926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4000,"bootTime":1718620230,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:37:10.967825    7926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:37:10.971618    7926 out.go:177] * [multinode-812000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:37:10.979617    7926 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:37:10.983582    7926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:37:10.979664    7926 notify.go:220] Checking for updates...
	I0617 04:37:10.991417    7926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:37:10.994611    7926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:37:10.999853    7926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:37:11.002597    7926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:37:11.005883    7926 config.go:182] Loaded profile config "multinode-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:37:11.006150    7926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:37:11.010593    7926 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:37:11.017575    7926 start.go:297] selected driver: qemu2
	I0617 04:37:11.017581    7926 start.go:901] validating driver "qemu2" against &{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:37:11.017655    7926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:37:11.019827    7926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:37:11.019851    7926 cni.go:84] Creating CNI manager for ""
	I0617 04:37:11.019857    7926 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0617 04:37:11.019897    7926 start.go:340] cluster config:
	{Name:multinode-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:37:11.024170    7926 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:11.030566    7926 out.go:177] * Starting "multinode-812000" primary control-plane node in "multinode-812000" cluster
	I0617 04:37:11.034601    7926 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:37:11.034618    7926 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:37:11.034628    7926 cache.go:56] Caching tarball of preloaded images
	I0617 04:37:11.034688    7926 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:37:11.034696    7926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:37:11.034786    7926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/multinode-812000/config.json ...
	I0617 04:37:11.035256    7926 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:11.035292    7926 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "multinode-812000"
	I0617 04:37:11.035301    7926 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:37:11.035308    7926 fix.go:54] fixHost starting: 
	I0617 04:37:11.035428    7926 fix.go:112] recreateIfNeeded on multinode-812000: state=Stopped err=<nil>
	W0617 04:37:11.035437    7926 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:37:11.038557    7926 out.go:177] * Restarting existing qemu2 VM for "multinode-812000" ...
	I0617 04:37:11.046626    7926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7f:f5:53:68:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:37:11.048586    7926 main.go:141] libmachine: STDOUT: 
	I0617 04:37:11.048612    7926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:11.048650    7926 fix.go:56] duration metric: took 13.339959ms for fixHost
	I0617 04:37:11.048655    7926 start.go:83] releasing machines lock for "multinode-812000", held for 13.35875ms
	W0617 04:37:11.048663    7926 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:37:11.048707    7926 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:11.048712    7926 start.go:728] Will try again in 5 seconds ...
	I0617 04:37:16.050837    7926 start.go:360] acquireMachinesLock for multinode-812000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:16.051205    7926 start.go:364] duration metric: took 285.958µs to acquireMachinesLock for "multinode-812000"
	I0617 04:37:16.051346    7926 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:37:16.051367    7926 fix.go:54] fixHost starting: 
	I0617 04:37:16.052096    7926 fix.go:112] recreateIfNeeded on multinode-812000: state=Stopped err=<nil>
	W0617 04:37:16.052128    7926 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:37:16.059684    7926 out.go:177] * Restarting existing qemu2 VM for "multinode-812000" ...
	I0617 04:37:16.063621    7926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:7f:f5:53:68:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/multinode-812000/disk.qcow2
	I0617 04:37:16.073440    7926 main.go:141] libmachine: STDOUT: 
	I0617 04:37:16.073508    7926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:16.073581    7926 fix.go:56] duration metric: took 22.21525ms for fixHost
	I0617 04:37:16.073605    7926 start.go:83] releasing machines lock for "multinode-812000", held for 22.374209ms
	W0617 04:37:16.073765    7926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:16.081709    7926 out.go:177] 
	W0617 04:37:16.085691    7926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:37:16.085713    7926 out.go:239] * 
	* 
	W0617 04:37:16.088164    7926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:37:16.096602    7926 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-812000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (67.599792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
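Note: the restart path retries once after five seconds and then exits with code 80 (GUEST_PROVISION), so this failure has the same root cause as RestartKeepsNodes above. If socket_vmnet is installed as a launchd service on the agent, its state can be inspected directly; the service label below follows the upstream socket_vmnet convention and is an assumption, not something recorded in this log:

	sudo launchctl list | grep socket_vmnet
	sudo launchctl print system/io.github.lima-vm.socket_vmnet    # assumed label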

TestMultiNode/serial/ValidateNameConflict (20.14s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-812000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-812000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-812000-m01 --driver=qemu2 : exit status 80 (9.909148333s)

-- stdout --
	* [multinode-812000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-812000-m01" primary control-plane node in "multinode-812000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-812000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-812000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-812000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-812000-m02 --driver=qemu2 : exit status 80 (9.977385625s)

-- stdout --
	* [multinode-812000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-812000-m02" primary control-plane node in "multinode-812000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-812000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-812000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-812000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-812000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-812000: exit status 83 (79.278417ms)

-- stdout --
	* The control-plane node multinode-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-812000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-812000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-812000 -n multinode-812000: exit status 7 (30.84175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.14s)

TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-800000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-800000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.932169666s)

-- stdout --
	* [test-preload-800000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-800000" primary control-plane node in "test-preload-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:37:36.480491    7982 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:37:36.480615    7982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:36.480618    7982 out.go:304] Setting ErrFile to fd 2...
	I0617 04:37:36.480621    7982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:37:36.480747    7982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:37:36.481769    7982 out.go:298] Setting JSON to false
	I0617 04:37:36.497945    7982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4026,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:37:36.498016    7982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:37:36.504651    7982 out.go:177] * [test-preload-800000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:37:36.512811    7982 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:37:36.516816    7982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:37:36.512855    7982 notify.go:220] Checking for updates...
	I0617 04:37:36.519814    7982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:37:36.522790    7982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:37:36.525819    7982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:37:36.528797    7982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:37:36.532247    7982 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:37:36.532293    7982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:37:36.535777    7982 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:37:36.542806    7982 start.go:297] selected driver: qemu2
	I0617 04:37:36.542811    7982 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:37:36.542817    7982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:37:36.545014    7982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:37:36.546616    7982 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:37:36.549820    7982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:37:36.549858    7982 cni.go:84] Creating CNI manager for ""
	I0617 04:37:36.549867    7982 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:37:36.549874    7982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:37:36.549902    7982 start.go:340] cluster config:
	{Name:test-preload-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:37:36.554287    7982 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.561807    7982 out.go:177] * Starting "test-preload-800000" primary control-plane node in "test-preload-800000" cluster
	I0617 04:37:36.565767    7982 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0617 04:37:36.565864    7982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/test-preload-800000/config.json ...
	I0617 04:37:36.565877    7982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/test-preload-800000/config.json: {Name:mk1be6e4f3d514d0df0505bd64de203d9ae90840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:37:36.565905    7982 cache.go:107] acquiring lock: {Name:mk659eb9e8657f0d926428caab9cd1d5e2e37549 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.565922    7982 cache.go:107] acquiring lock: {Name:mke467edca0929840946f2c3402c681175ac3d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.565942    7982 cache.go:107] acquiring lock: {Name:mk3fa2759037c1bc2e1578b3df16cfdf26e3a514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566009    7982 cache.go:107] acquiring lock: {Name:mk37aa0b450b206768438531f241f7c57780846b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566159    7982 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0617 04:37:36.566175    7982 cache.go:107] acquiring lock: {Name:mke2892142ec2dc2492caa8e73b9e9a8ea9116d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566206    7982 cache.go:107] acquiring lock: {Name:mk52c71cb9c8ba82110302443c0ccbb75b7d159a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566209    7982 cache.go:107] acquiring lock: {Name:mk81cf58de9e89e1723d4ccd2158c17fc5bcea55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566189    7982 cache.go:107] acquiring lock: {Name:mk2dfabd1fd5f73f01a83f0339816b460eaa1d9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:37:36.566235    7982 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 04:37:36.566201    7982 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:37:36.566321    7982 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0617 04:37:36.566460    7982 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0617 04:37:36.566470    7982 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:37:36.566488    7982 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:37:36.566499    7982 start.go:360] acquireMachinesLock for test-preload-800000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:36.566544    7982 start.go:364] duration metric: took 39.709µs to acquireMachinesLock for "test-preload-800000"
	I0617 04:37:36.566572    7982 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0617 04:37:36.566558    7982 start.go:93] Provisioning new machine with config: &{Name:test-preload-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:37:36.566596    7982 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:37:36.569822    7982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:37:36.580255    7982 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0617 04:37:36.580943    7982 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0617 04:37:36.580965    7982 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0617 04:37:36.580969    7982 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:37:36.581077    7982 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0617 04:37:36.581082    7982 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:37:36.581135    7982 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:37:36.581167    7982 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0617 04:37:36.588492    7982 start.go:159] libmachine.API.Create for "test-preload-800000" (driver="qemu2")
	I0617 04:37:36.588522    7982 client.go:168] LocalClient.Create starting
	I0617 04:37:36.588630    7982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:37:36.588662    7982 main.go:141] libmachine: Decoding PEM data...
	I0617 04:37:36.588675    7982 main.go:141] libmachine: Parsing certificate...
	I0617 04:37:36.588724    7982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:37:36.588749    7982 main.go:141] libmachine: Decoding PEM data...
	I0617 04:37:36.588757    7982 main.go:141] libmachine: Parsing certificate...
	I0617 04:37:36.589135    7982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:37:36.740761    7982 main.go:141] libmachine: Creating SSH key...
	I0617 04:37:36.793635    7982 main.go:141] libmachine: Creating Disk image...
	I0617 04:37:36.793655    7982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:37:36.793842    7982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:36.807026    7982 main.go:141] libmachine: STDOUT: 
	I0617 04:37:36.807095    7982 main.go:141] libmachine: STDERR: 
	I0617 04:37:36.807144    7982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2 +20000M
	I0617 04:37:36.819198    7982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:37:36.819239    7982 main.go:141] libmachine: STDERR: 
	I0617 04:37:36.819258    7982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:36.819263    7982 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:37:36.819307    7982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c0:77:3d:ca:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:36.821790    7982 main.go:141] libmachine: STDOUT: 
	I0617 04:37:36.821817    7982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:36.821838    7982 client.go:171] duration metric: took 233.311166ms to LocalClient.Create
	I0617 04:37:37.484318    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0617 04:37:37.515795    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0617 04:37:37.529549    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0617 04:37:37.535587    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0617 04:37:37.669998    7982 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0617 04:37:37.670115    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0617 04:37:37.693572    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0617 04:37:37.696222    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0617 04:37:37.794419    7982 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0617 04:37:37.794510    7982 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 04:37:37.901049    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0617 04:37:37.901105    7982 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.334934208s
	I0617 04:37:37.901152    7982 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0617 04:37:38.480538    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0617 04:37:38.480616    7982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.914728166s
	I0617 04:37:38.480651    7982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0617 04:37:38.822030    7982 start.go:128] duration metric: took 2.255423167s to createHost
	I0617 04:37:38.822087    7982 start.go:83] releasing machines lock for "test-preload-800000", held for 2.255555292s
	W0617 04:37:38.822153    7982 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:38.840688    7982 out.go:177] * Deleting "test-preload-800000" in qemu2 ...
	W0617 04:37:38.876212    7982 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:38.876246    7982 start.go:728] Will try again in 5 seconds ...
	I0617 04:37:39.621122    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0617 04:37:39.621191    7982 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.055004833s
	I0617 04:37:39.621221    7982 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0617 04:37:39.918250    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0617 04:37:39.918298    7982 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.352418459s
	I0617 04:37:39.918326    7982 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0617 04:37:41.785383    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0617 04:37:41.785427    7982 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.2194915s
	I0617 04:37:41.785453    7982 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0617 04:37:42.601440    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0617 04:37:42.601532    7982 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.03566225s
	I0617 04:37:42.601557    7982 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0617 04:37:43.265602    7982 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0617 04:37:43.265689    7982 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.699572792s
	I0617 04:37:43.265717    7982 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0617 04:37:43.876557    7982 start.go:360] acquireMachinesLock for test-preload-800000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:37:43.876937    7982 start.go:364] duration metric: took 298.458µs to acquireMachinesLock for "test-preload-800000"
	I0617 04:37:43.877052    7982 start.go:93] Provisioning new machine with config: &{Name:test-preload-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:37:43.877383    7982 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:37:43.891079    7982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:37:43.939838    7982 start.go:159] libmachine.API.Create for "test-preload-800000" (driver="qemu2")
	I0617 04:37:43.939909    7982 client.go:168] LocalClient.Create starting
	I0617 04:37:43.940018    7982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:37:43.940081    7982 main.go:141] libmachine: Decoding PEM data...
	I0617 04:37:43.940096    7982 main.go:141] libmachine: Parsing certificate...
	I0617 04:37:43.940148    7982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:37:43.940193    7982 main.go:141] libmachine: Decoding PEM data...
	I0617 04:37:43.940206    7982 main.go:141] libmachine: Parsing certificate...
	I0617 04:37:43.940713    7982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:37:44.102759    7982 main.go:141] libmachine: Creating SSH key...
	I0617 04:37:44.310262    7982 main.go:141] libmachine: Creating Disk image...
	I0617 04:37:44.310273    7982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:37:44.310459    7982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:44.323667    7982 main.go:141] libmachine: STDOUT: 
	I0617 04:37:44.323688    7982 main.go:141] libmachine: STDERR: 
	I0617 04:37:44.323738    7982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2 +20000M
	I0617 04:37:44.335006    7982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:37:44.335028    7982 main.go:141] libmachine: STDERR: 
	I0617 04:37:44.335045    7982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:44.335056    7982 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:37:44.335097    7982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:40:7b:05:aa:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/test-preload-800000/disk.qcow2
	I0617 04:37:44.337013    7982 main.go:141] libmachine: STDOUT: 
	I0617 04:37:44.337028    7982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:37:44.337045    7982 client.go:171] duration metric: took 397.133958ms to LocalClient.Create
	I0617 04:37:46.338768    7982 start.go:128] duration metric: took 2.461331583s to createHost
	I0617 04:37:46.338831    7982 start.go:83] releasing machines lock for "test-preload-800000", held for 2.461891083s
	W0617 04:37:46.339122    7982 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:37:46.349730    7982 out.go:177] 
	W0617 04:37:46.356796    7982 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:37:46.356823    7982 out.go:239] * 
	* 
	W0617 04:37:46.359202    7982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:37:46.369619    7982 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-800000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-06-17 04:37:46.387034 -0700 PDT m=+692.525427167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-800000 -n test-preload-800000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-800000 -n test-preload-800000: exit status 7 (66.922833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-800000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-800000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-800000
--- FAIL: TestPreload (10.11s)
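
Every start attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched, and after one delete-and-retry cycle minikube gives up with GUEST_PROVISION (exit status 80). The failing operation can be reproduced outside the test suite with a plain unix-socket dial; the sketch below is illustrative only, assuming the default socket path shown in the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint that socket_vmnet_client is given on the qemu command
	// line above; a failed dial here corresponds to the repeated
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet accepted the connection; VM creation should get past this step")
}

A successful dial on a later re-run would point to a transient daemon crash rather than a persistent misconfiguration of the CI host.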

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-046000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-046000 --memory=2048 --driver=qemu2 : exit status 80 (9.802311875s)

-- stdout --
	* [scheduled-stop-046000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-046000" primary control-plane node in "scheduled-stop-046000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-046000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-046000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-046000" primary control-plane node in "scheduled-stop-046000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-046000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-06-17 04:37:56.366976 -0700 PDT m=+702.505471751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-046000 -n scheduled-stop-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-046000 -n scheduled-stop-046000: exit status 7 (69.952041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-046000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-046000
--- FAIL: TestScheduledStopUnix (9.98s)
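
The scheduled-stop logic is never exercised here: both creation attempts fail with the same socket_vmnet connection refusal as TestPreload. A hypothetical pre-flight guard (not part of the minikube test suite; the helper name is invented for illustration) could skip socket_vmnet-dependent tests up front instead of spending a create/delete/retry cycle on each one:

package integration

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet skips the calling test when the socket_vmnet daemon
// socket is unreachable, which is the condition behind every qemu2
// failure in this report.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		t.Skipf("socket_vmnet unavailable: %v", err)
	}
	conn.Close()
}

Called at the top of each affected test, a guard like this would turn the ~10s failures above into immediate skips while the host is in this state.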

TestSkaffold (13.59s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3581019935 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-973000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-973000 --memory=2600 --driver=qemu2 : exit status 80 (9.889777042s)

-- stdout --
	* [skaffold-973000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-973000" primary control-plane node in "skaffold-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-973000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-973000" primary control-plane node in "skaffold-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-06-17 04:38:09.964172 -0700 PDT m=+716.102807959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-973000 -n skaffold-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-973000 -n skaffold-973000: exit status 7 (61.891083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-973000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-973000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-973000
--- FAIL: TestSkaffold (13.59s)
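
TestSkaffold gets only as far as the skaffold version check before minikube start fails in the familiar way. Before re-running the suite it is worth confirming that a socket_vmnet daemon process exists on the CI host at all; a rough host-side check (a sketch assuming pgrep semantics on macOS):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits non-zero when nothing matches; -f matches against the
	// full command line, -l lists the pid and matching command.
	out, err := exec.Command("pgrep", "-fl", "socket_vmnet").CombinedOutput()
	if err != nil {
		fmt.Println("no socket_vmnet process found; start the daemon (as root) before re-running the qemu2 tests")
		return
	}
	fmt.Printf("socket_vmnet processes:\n%s", out)
}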

TestRunningBinaryUpgrade (609.45s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3815001604 start -p running-upgrade-857000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3815001604 start -p running-upgrade-857000 --memory=2200 --vm-driver=qemu2 : (1m10.626133417s)
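
Unlike every test above, the legacy v1.26.0 binary creates its VM successfully: in the validating-driver config dump further below, Network and SocketVMnetPath are empty, which suggests this older profile does not go through the broken socket_vmnet daemon at all. To verify that directly one could inspect the saved profile config; the following is a sketch (the path is copied from the "Saving config to ..." line in this log, and the struct keeps only the two fields of interest, named as in the ClusterConfig dump):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path copied from this log; adjust for a different MINIKUBE_HOME.
	const path = "/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/config.json"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("cannot read profile config:", err)
		return
	}

	var cfg struct {
		Network         string
		SocketVMnetPath string
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Println("cannot parse profile config:", err)
		return
	}
	// Both values print empty for a profile created before socket_vmnet
	// networking existed.
	fmt.Printf("Network=%q SocketVMnetPath=%q\n", cfg.Network, cfg.SocketVMnetPath)
}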
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-857000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-857000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.355497417s)

-- stdout --
	* [running-upgrade-857000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-857000" primary control-plane node in "running-upgrade-857000" cluster
	* Updating the running qemu2 "running-upgrade-857000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0617 04:40:03.413785    8395 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:40:03.413936    8395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:40:03.413939    8395 out.go:304] Setting ErrFile to fd 2...
	I0617 04:40:03.413941    8395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:40:03.414078    8395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:40:03.415116    8395 out.go:298] Setting JSON to false
	I0617 04:40:03.431963    8395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4173,"bootTime":1718620230,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:40:03.432063    8395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:40:03.436891    8395 out.go:177] * [running-upgrade-857000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:40:03.444873    8395 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:40:03.444916    8395 notify.go:220] Checking for updates...
	I0617 04:40:03.452673    8395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:40:03.460858    8395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:40:03.463878    8395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:40:03.466902    8395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:40:03.470877    8395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:40:03.474182    8395 config.go:182] Loaded profile config "running-upgrade-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:40:03.477824    8395 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 04:40:03.480897    8395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:40:03.483841    8395 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:40:03.490896    8395 start.go:297] selected driver: qemu2
	I0617 04:40:03.490903    8395 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:40:03.490969    8395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:40:03.493350    8395 cni.go:84] Creating CNI manager for ""
	I0617 04:40:03.493367    8395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:40:03.493389    8395 start.go:340] cluster config:
	{Name:running-upgrade-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:40:03.493438    8395 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:40:03.501884    8395 out.go:177] * Starting "running-upgrade-857000" primary control-plane node in "running-upgrade-857000" cluster
	I0617 04:40:03.505733    8395 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:40:03.505750    8395 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0617 04:40:03.505762    8395 cache.go:56] Caching tarball of preloaded images
	I0617 04:40:03.505833    8395 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:40:03.505838    8395 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0617 04:40:03.505935    8395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/config.json ...
	I0617 04:40:03.506453    8395 start.go:360] acquireMachinesLock for running-upgrade-857000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:40:03.506493    8395 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "running-upgrade-857000"
	I0617 04:40:03.506502    8395 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:40:03.506507    8395 fix.go:54] fixHost starting: 
	I0617 04:40:03.507236    8395 fix.go:112] recreateIfNeeded on running-upgrade-857000: state=Running err=<nil>
	W0617 04:40:03.507243    8395 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:40:03.514840    8395 out.go:177] * Updating the running qemu2 "running-upgrade-857000" VM ...
	I0617 04:40:03.518809    8395 machine.go:94] provisionDockerMachine start ...
	I0617 04:40:03.518849    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:03.518964    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:03.518969    8395 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 04:40:03.594002    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-857000
	
	I0617 04:40:03.594019    8395 buildroot.go:166] provisioning hostname "running-upgrade-857000"
	I0617 04:40:03.594082    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:03.594189    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:03.594194    8395 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-857000 && echo "running-upgrade-857000" | sudo tee /etc/hostname
	I0617 04:40:03.674138    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-857000
	
	I0617 04:40:03.674186    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:03.674299    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:03.674307    8395 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 04:40:03.748263    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 04:40:03.748278    8395 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19087-6045/.minikube CaCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19087-6045/.minikube}
	I0617 04:40:03.748286    8395 buildroot.go:174] setting up certificates
	I0617 04:40:03.748290    8395 provision.go:84] configureAuth start
	I0617 04:40:03.748294    8395 provision.go:143] copyHostCerts
	I0617 04:40:03.748380    8395 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem, removing ...
	I0617 04:40:03.748386    8395 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem
	I0617 04:40:03.748521    8395 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem (1679 bytes)
	I0617 04:40:03.748699    8395 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem, removing ...
	I0617 04:40:03.748703    8395 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem
	I0617 04:40:03.748759    8395 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem (1078 bytes)
	I0617 04:40:03.748870    8395 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem, removing ...
	I0617 04:40:03.748873    8395 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem
	I0617 04:40:03.748922    8395 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem (1123 bytes)
	I0617 04:40:03.749008    8395 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-857000 san=[127.0.0.1 localhost minikube running-upgrade-857000]
	I0617 04:40:03.871378    8395 provision.go:177] copyRemoteCerts
	I0617 04:40:03.871421    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 04:40:03.871431    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:40:03.911206    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 04:40:03.918125    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0617 04:40:03.924717    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 04:40:03.931938    8395 provision.go:87] duration metric: took 183.643042ms to configureAuth
	I0617 04:40:03.931946    8395 buildroot.go:189] setting minikube options for container-runtime
	I0617 04:40:03.932048    8395 config.go:182] Loaded profile config "running-upgrade-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:40:03.932080    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:03.932168    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:03.932173    8395 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0617 04:40:04.006311    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0617 04:40:04.006320    8395 buildroot.go:70] root file system type: tmpfs
	I0617 04:40:04.006367    8395 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0617 04:40:04.006410    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:04.006544    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:04.006577    8395 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0617 04:40:04.084721    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0617 04:40:04.084767    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:04.084896    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:04.084907    8395 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0617 04:40:04.161137    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
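	
	The command just run is an idempotent update: Docker is restarted only when the freshly rendered unit differs from the installed one. The same pattern, spelled out (sketch):
	
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  # diff exits non-zero when the files differ: install the new unit and restart
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi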
	I0617 04:40:04.161148    8395 machine.go:97] duration metric: took 642.3405ms to provisionDockerMachine
	I0617 04:40:04.161154    8395 start.go:293] postStartSetup for "running-upgrade-857000" (driver="qemu2")
	I0617 04:40:04.161161    8395 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 04:40:04.161210    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 04:40:04.161220    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:40:04.202463    8395 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 04:40:04.203868    8395 info.go:137] Remote host: Buildroot 2021.02.12
	I0617 04:40:04.203875    8395 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/addons for local assets ...
	I0617 04:40:04.203956    8395 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/files for local assets ...
	I0617 04:40:04.204075    8395 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem -> 65402.pem in /etc/ssl/certs
	I0617 04:40:04.204208    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 04:40:04.206848    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:40:04.215422    8395 start.go:296] duration metric: took 54.260375ms for postStartSetup
	I0617 04:40:04.215437    8395 fix.go:56] duration metric: took 708.938042ms for fixHost
	I0617 04:40:04.215489    8395 main.go:141] libmachine: Using SSH client type: native
	I0617 04:40:04.215597    8395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ef2980] 0x102ef51e0 <nil>  [] 0s} localhost 51257 <nil> <nil>}
	I0617 04:40:04.215601    8395 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 04:40:04.290550    8395 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624404.478460347
	
	I0617 04:40:04.290559    8395 fix.go:216] guest clock: 1718624404.478460347
	I0617 04:40:04.290563    8395 fix.go:229] Guest: 2024-06-17 04:40:04.478460347 -0700 PDT Remote: 2024-06-17 04:40:04.215439 -0700 PDT m=+0.820721876 (delta=263.021347ms)
	I0617 04:40:04.290578    8395 fix.go:200] guest clock delta is within tolerance: 263.021347ms
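	
	The drift check above runs "date +%s.%N" in the guest and subtracts the host clock; a rough by-hand equivalent (sketch; the SSH target matches this run's docker@localhost:51257, but the 2s tolerance is illustrative):
	
	guest=$(ssh -p 51257 docker@localhost 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = g - h; if (d < 0) d = -d        # absolute drift in seconds
	  printf "delta=%.3fs\n", d
	  exit (d > 2.0) ? 1 : 0              # non-zero exit when outside tolerance
	}'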
	I0617 04:40:04.290580    8395 start.go:83] releasing machines lock for "running-upgrade-857000", held for 784.091416ms
	I0617 04:40:04.290643    8395 ssh_runner.go:195] Run: cat /version.json
	I0617 04:40:04.290654    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:40:04.290643    8395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 04:40:04.290681    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	W0617 04:40:04.291344    8395 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51257: connect: connection refused
	I0617 04:40:04.291365    8395 retry.go:31] will retry after 305.07864ms: dial tcp [::1]:51257: connect: connection refused
	W0617 04:40:04.645563    8395 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0617 04:40:04.645678    8395 ssh_runner.go:195] Run: systemctl --version
	I0617 04:40:04.648341    8395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 04:40:04.650683    8395 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 04:40:04.650726    8395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0617 04:40:04.654320    8395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0617 04:40:04.659503    8395 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
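	
	The two find/sed passes above rewrite any bridge/podman CNI config so its subnet and gateway fall inside the pod CIDR. What the podman pass does to a made-up config (sketch; the sample file and its addresses are hypothetical):
	
	cat > /tmp/sample.conflist <<-'EOF'
	{
	  "type": "bridge",
	  "ipam": {
	    "subnet": "10.88.0.0/16",
	    "gateway": "10.88.0.1"
	  }
	}
	EOF
	sed -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	       -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' /tmp/sample.conflist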
	I0617 04:40:04.659511    8395 start.go:494] detecting cgroup driver to use...
	I0617 04:40:04.659629    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:40:04.665972    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0617 04:40:04.669287    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 04:40:04.672816    8395 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 04:40:04.672836    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 04:40:04.676015    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:40:04.679115    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 04:40:04.681990    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:40:04.685493    8395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 04:40:04.688995    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 04:40:04.691958    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0617 04:40:04.694721    8395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
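	
	The sed series above pins containerd to the cgroupfs driver, the runc v2 shim, and /etc/cni/net.d. A quick way to confirm the result on the guest (sketch):
	
	grep -E 'SystemdCgroup|io\.containerd\.runc|conf_dir' /etc/containerd/config.toml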
	I0617 04:40:04.697734    8395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 04:40:04.700399    8395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 04:40:04.702979    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:04.795090    8395 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0617 04:40:04.806142    8395 start.go:494] detecting cgroup driver to use...
	I0617 04:40:04.806213    8395 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0617 04:40:04.812093    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:40:04.816843    8395 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 04:40:04.823625    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:40:04.827763    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 04:40:04.831914    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:40:04.837373    8395 ssh_runner.go:195] Run: which cri-dockerd
	I0617 04:40:04.838765    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0617 04:40:04.844703    8395 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0617 04:40:04.849851    8395 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0617 04:40:04.942079    8395 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0617 04:40:05.031384    8395 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0617 04:40:05.031438    8395 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
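	
	The 130-byte /etc/docker/daemon.json is scp'd from memory and never echoed, so its exact contents are not in this log; the cgroupfs selection it carries would look along these lines (assumed illustration, not the actual payload):
	
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF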
	I0617 04:40:05.036749    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:05.135243    8395 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:40:07.402501    8395 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267266792s)
	I0617 04:40:07.402569    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0617 04:40:07.406860    8395 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0617 04:40:07.412724    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:40:07.417798    8395 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0617 04:40:07.501714    8395 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0617 04:40:07.584117    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:07.664741    8395 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0617 04:40:07.671084    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:40:07.676262    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:07.747446    8395 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0617 04:40:07.788970    8395 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0617 04:40:07.789052    8395 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0617 04:40:07.791119    8395 start.go:562] Will wait 60s for crictl version
	I0617 04:40:07.791168    8395 ssh_runner.go:195] Run: which crictl
	I0617 04:40:07.792602    8395 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 04:40:07.809521    8395 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
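	
	The version banner above is crictl talking to cri-dockerd; the same query can be issued explicitly against the socket (sketch):
	
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version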
	I0617 04:40:07.809578    8395 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:40:07.821870    8395 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:40:07.850900    8395 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0617 04:40:07.851014    8395 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0617 04:40:07.852337    8395 kubeadm.go:877] updating cluster {Name:running-upgrade-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0617 04:40:07.852380    8395 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:40:07.852419    8395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:40:07.863030    8395 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:40:07.863037    8395 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:40:07.863087    8395 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:40:07.866371    8395 ssh_runner.go:195] Run: which lz4
	I0617 04:40:07.867669    8395 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 04:40:07.868918    8395 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 04:40:07.868930    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0617 04:40:08.573106    8395 docker.go:649] duration metric: took 705.474625ms to copy over tarball
	I0617 04:40:08.573166    8395 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 04:40:09.799706    8395 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.226539s)
	I0617 04:40:09.799721    8395 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 04:40:09.815354    8395 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:40:09.818459    8395 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0617 04:40:09.823403    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:09.903878    8395 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:40:11.241110    8395 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.337230292s)
	I0617 04:40:11.241201    8395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:40:11.253544    8395 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:40:11.253554    8395 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:40:11.253559    8395 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 04:40:11.259617    8395 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:40:11.259623    8395 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:40:11.259686    8395 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0617 04:40:11.259714    8395 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:40:11.259748    8395 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:40:11.259766    8395 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:40:11.259832    8395 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:40:11.260322    8395 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:40:11.267548    8395 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:40:11.267608    8395 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:40:11.267624    8395 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:40:11.267737    8395 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:40:11.268398    8395 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:40:11.268453    8395 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0617 04:40:11.268503    8395 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:40:11.268561    8395 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	W0617 04:40:12.218343    8395 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0617 04:40:12.218865    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	W0617 04:40:12.219370    8395 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0617 04:40:12.219521    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:40:12.247964    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:40:12.260822    8395 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0617 04:40:12.260833    8395 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0617 04:40:12.260852    8395 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:40:12.260852    8395 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:40:12.260920    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:40:12.260933    8395 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:40:12.279626    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:40:12.282431    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:40:12.282939    8395 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0617 04:40:12.282957    8395 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:40:12.282979    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:40:12.287822    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0617 04:40:12.314381    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:40:12.336744    8395 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0617 04:40:13.527172    8395 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/coredns/coredns:v1.8.6: (1.266229958s)
	I0617 04:40:13.527216    8395 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1: (1.247572917s)
	I0617 04:40:13.527252    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0617 04:40:13.527265    8395 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0617 04:40:13.527312    8395 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:40:13.527389    8395 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1: (1.244948209s)
	I0617 04:40:13.527415    8395 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0617 04:40:13.527435    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:40:13.527439    8395 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:40:13.527500    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:40:13.527543    8395 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-proxy:v1.24.1: (1.24456725s)
	I0617 04:40:13.527561    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0617 04:40:13.527611    8395 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (1.23978125s)
	I0617 04:40:13.527625    8395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:40:13.527641    8395 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0617 04:40:13.527684    8395 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:40:13.527704    8395 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1: (1.213319167s)
	I0617 04:40:13.527737    8395 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0617 04:40:13.527742    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0617 04:40:13.527758    8395 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:40:13.527763    8395 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (1.191010958s)
	I0617 04:40:13.527812    8395 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0617 04:40:13.527816    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:40:13.527842    8395 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0617 04:40:13.527902    8395 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0617 04:40:13.528117    8395 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.2671855s)
	I0617 04:40:13.528129    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 04:40:13.528312    8395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:40:13.586690    8395 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0617 04:40:13.586715    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0617 04:40:13.586725    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0617 04:40:13.586954    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0617 04:40:13.597707    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0617 04:40:13.597736    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0617 04:40:13.597789    8395 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0617 04:40:13.597816    8395 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0617 04:40:13.597833    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0617 04:40:13.597852    8395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0617 04:40:13.600507    8395 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0617 04:40:13.600528    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0617 04:40:13.626004    8395 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0617 04:40:13.626021    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0617 04:40:13.683916    8395 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0617 04:40:13.683942    8395 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:40:13.683949    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0617 04:40:13.915158    8395 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 04:40:13.915181    8395 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:40:13.915188    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0617 04:40:13.954487    8395 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0617 04:40:13.954527    8395 cache_images.go:92] duration metric: took 2.700987458s to LoadCachedImages
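	
	Each cached image above is streamed into the daemon with "sudo cat <tar> | docker load"; docker load -i is the equivalent one-step form (sketch):
	
	sudo docker load -i /var/lib/minikube/images/pause_3.7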
	W0617 04:40:13.954581    8395 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0617 04:40:13.954590    8395 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0617 04:40:13.954659    8395 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 04:40:13.954748    8395 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0617 04:40:13.968474    8395 cni.go:84] Creating CNI manager for ""
	I0617 04:40:13.968487    8395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:40:13.968499    8395 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 04:40:13.968508    8395 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-857000 NodeName:running-upgrade-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 04:40:13.968578    8395 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-857000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
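	
	Once rendered, a config like the one above can be sanity-checked before any init phase runs; recent kubeadm releases ship a validator for exactly this (sketch; kubeadm config validate postdates the v1.24.1 binaries used in this run):
	
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml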
	
	I0617 04:40:13.968632    8395 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0617 04:40:13.971611    8395 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 04:40:13.971637    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 04:40:13.974255    8395 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0617 04:40:13.979660    8395 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 04:40:13.984482    8395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0617 04:40:13.989786    8395 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0617 04:40:13.991222    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:40:14.071655    8395 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:40:14.076574    8395 certs.go:68] Setting up /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000 for IP: 10.0.2.15
	I0617 04:40:14.076583    8395 certs.go:194] generating shared ca certs ...
	I0617 04:40:14.076592    8395 certs.go:226] acquiring lock for ca certs: {Name:mk71e2ea16ce0c468e7dfee6f005765117fbc8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:40:14.076834    8395 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key
	I0617 04:40:14.076882    8395 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key
	I0617 04:40:14.076888    8395 certs.go:256] generating profile certs ...
	I0617 04:40:14.076946    8395 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.key
	I0617 04:40:14.076956    8395 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key.5fef17ab
	I0617 04:40:14.076965    8395 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt.5fef17ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0617 04:40:14.149825    8395 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt.5fef17ab ...
	I0617 04:40:14.149830    8395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt.5fef17ab: {Name:mkff21268830180982accc06b5ecaeaebab4cb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:40:14.150054    8395 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key.5fef17ab ...
	I0617 04:40:14.150058    8395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key.5fef17ab: {Name:mk40f6d09d280c7f5aea4445c36faa1c88b7929c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:40:14.150185    8395 certs.go:381] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt.5fef17ab -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt
	I0617 04:40:14.150369    8395 certs.go:385] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key.5fef17ab -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key
	I0617 04:40:14.150561    8395 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/proxy-client.key
	I0617 04:40:14.150699    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem (1338 bytes)
	W0617 04:40:14.150729    8395 certs.go:480] ignoring /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540_empty.pem, impossibly tiny 0 bytes
	I0617 04:40:14.150734    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 04:40:14.150753    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem (1078 bytes)
	I0617 04:40:14.150771    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem (1123 bytes)
	I0617 04:40:14.150787    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem (1679 bytes)
	I0617 04:40:14.150822    8395 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:40:14.151149    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 04:40:14.159170    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 04:40:14.166773    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 04:40:14.174159    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 04:40:14.181514    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 04:40:14.188050    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 04:40:14.195048    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 04:40:14.202492    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 04:40:14.210082    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem --> /usr/share/ca-certificates/6540.pem (1338 bytes)
	I0617 04:40:14.217075    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /usr/share/ca-certificates/65402.pem (1708 bytes)
	I0617 04:40:14.225070    8395 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 04:40:14.231567    8395 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 04:40:14.236599    8395 ssh_runner.go:195] Run: openssl version
	I0617 04:40:14.238442    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 04:40:14.241486    8395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:40:14.243041    8395 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:40:14.243063    8395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:40:14.245005    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 04:40:14.247849    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6540.pem && ln -fs /usr/share/ca-certificates/6540.pem /etc/ssl/certs/6540.pem"
	I0617 04:40:14.251362    8395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6540.pem
	I0617 04:40:14.252822    8395 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 11:27 /usr/share/ca-certificates/6540.pem
	I0617 04:40:14.252842    8395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6540.pem
	I0617 04:40:14.254608    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6540.pem /etc/ssl/certs/51391683.0"
	I0617 04:40:14.257390    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65402.pem && ln -fs /usr/share/ca-certificates/65402.pem /etc/ssl/certs/65402.pem"
	I0617 04:40:14.260284    8395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65402.pem
	I0617 04:40:14.261919    8395 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 11:27 /usr/share/ca-certificates/65402.pem
	I0617 04:40:14.261940    8395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65402.pem
	I0617 04:40:14.263594    8395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65402.pem /etc/ssl/certs/3ec20f2e.0"
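	
	The .0 link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: each is the OpenSSL subject hash of the certificate, which is how OpenSSL looks CAs up in /etc/ssl/certs. Deriving one by hand (sketch):
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"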
	I0617 04:40:14.268146    8395 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 04:40:14.270285    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 04:40:14.272687    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 04:40:14.275044    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 04:40:14.281737    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 04:40:14.284625    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 04:40:14.287229    8395 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
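	
	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so each command above is a 24-hour validity probe (sketch):
	
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver cert good for at least another day"
	else
	  echo "apiserver cert expires within 24h"
	fi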
	I0617 04:40:14.289442    8395 kubeadm.go:391] StartCluster: {Name:running-upgrade-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:40:14.289520    8395 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:40:14.300240    8395 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 04:40:14.303858    8395 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 04:40:14.303864    8395 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 04:40:14.303867    8395 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 04:40:14.303890    8395 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 04:40:14.307502    8395 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:40:14.307537    8395 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-857000" does not appear in /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:40:14.307555    8395 kubeconfig.go:62] /Users/jenkins/minikube-integration/19087-6045/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-857000" cluster setting kubeconfig missing "running-upgrade-857000" context setting]
	I0617 04:40:14.307712    8395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:40:14.308456    8395 kapi.go:59] client config for running-upgrade-857000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104280460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:40:14.309292    8395 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 04:40:14.312190    8395 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-857000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0617 04:40:14.312195    8395 kubeadm.go:1154] stopping kube-system containers ...
	I0617 04:40:14.312234    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:40:14.326337    8395 docker.go:483] Stopping containers: [afbaefbd9f78 ce46ef4d226b 236d3c456912 ac3f9b0c979d 956c3850f73c 4b8a1fb876c6 de7434430f1a 0d3125fffc84 7f9b0db25449 850770a3a8fa f90a4af745b2]
	I0617 04:40:14.326401    8395 ssh_runner.go:195] Run: docker stop afbaefbd9f78 ce46ef4d226b 236d3c456912 ac3f9b0c979d 956c3850f73c 4b8a1fb876c6 de7434430f1a 0d3125fffc84 7f9b0db25449 850770a3a8fa f90a4af745b2
	I0617 04:40:14.337276    8395 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 04:40:14.427386    8395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:40:14.430825    8395 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Jun 17 11:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 17 11:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jun 17 11:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 17 11:39 /etc/kubernetes/scheduler.conf
	
	I0617 04:40:14.430858    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf
	I0617 04:40:14.433583    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:40:14.433605    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:40:14.436438    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf
	I0617 04:40:14.439162    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:40:14.439187    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:40:14.441918    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf
	I0617 04:40:14.444925    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:40:14.444944    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:40:14.447958    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf
	I0617 04:40:14.450710    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:40:14.450728    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 04:40:14.453182    8395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:40:14.456283    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:40:14.480051    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:40:15.066527    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:40:15.279591    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:40:15.299902    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:40:15.333561    8395 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:40:15.333633    8395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:40:15.836005    8395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:40:16.335690    8395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:40:16.340137    8395 api_server.go:72] duration metric: took 1.006588667s to wait for apiserver process to appear ...
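
The process wait above polls pgrep roughly twice a second (-f matches against the full command line, -x requires an exact match, -n picks the newest PID) and records a duration metric once a kube-apiserver process shows up; here it took about one second. An illustrative sketch:

    // Illustrative sketch of the apiserver-process wait; the 500ms
    // cadence matches the timestamps above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        for exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Printf("took %s to wait for apiserver process to appear\n", time.Since(start))
    }
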
	I0617 04:40:16.340146    8395 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:40:16.340154    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:21.342075    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:21.342112    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:26.342633    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:26.342743    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:31.343583    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:31.343669    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:36.344958    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:36.345041    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:41.346580    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:41.346687    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:46.348716    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:46.348827    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:51.350901    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:51.350981    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:40:56.353549    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:40:56.353639    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:01.356245    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:01.356313    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:06.358800    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:06.358887    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:11.359668    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:11.359741    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:16.362231    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
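
From here the run degenerates into a probe loop: every GET to https://10.0.2.15:8443/healthz dies with a 5-second client timeout before any response headers arrive, which is consistent with nothing actually answering on the guest's apiserver port even though a kube-apiserver process exists. A sketch of the probe under stated assumptions: skipping certificate verification for the raw guest IP is a guess, not confirmed by the log, and the real loop also enforces an overall deadline and interleaves the diagnostics shown below:

    // Illustrative healthz probe sketch. The 5s client timeout reproduces
    // the "Client.Timeout exceeded while awaiting headers" failures above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: verification skipped for the raw-IP endpoint.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("stopped:", err) // what the log reports every ~5s
                continue                     // the client timeout paces retries
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(time.Second) // answered but not yet ready
        }
    }
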
	I0617 04:41:16.362335    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:16.375092    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:16.375181    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:16.385929    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:16.385995    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:16.396158    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:16.396221    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:16.406462    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:16.406516    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:16.416871    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:16.416927    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:16.428420    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:16.428484    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:16.438229    8395 logs.go:276] 0 containers: []
	W0617 04:41:16.438239    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:16.438298    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:16.448859    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:16.448875    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:16.448880    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:16.462584    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:16.462594    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:16.499747    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:16.499756    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:16.503980    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:16.503987    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:16.516469    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:16.516479    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:16.536510    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:16.536522    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:16.555405    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:16.555418    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:16.625633    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:16.625646    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:16.640005    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:16.640018    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:16.661009    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:16.661018    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:16.673393    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:16.673407    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:16.684801    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:16.684811    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:16.709570    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:16.709578    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:16.737244    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:16.737256    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:16.754585    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:16.754596    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:16.766050    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:16.766062    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:16.780895    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:16.780908    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
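
Between failed probes, minikube gathers diagnostics: for each control-plane component it lists container IDs with a docker name filter (k8s_<component>), then tails the last 400 lines of each container's logs, alongside the kubelet and docker journals, dmesg, "describe nodes", and a container-status listing. Two IDs per component indicate an exited earlier instance next to the current one, and the "kindnet" warning is benign on a cluster that does not use the kindnet CNI, since the filter simply matches nothing. A sketch of the per-component pass, with the component list taken from the log and error handling elided:

    // Illustrative sketch of the per-component diagnostics pass.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            out, _ := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

This probe-then-gather cycle repeats for the remainder of the section, roughly every eight seconds, with the same container IDs each time and the apiserver never turning healthy.
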
	I0617 04:41:19.293231    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:24.295729    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:24.296123    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:24.330260    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:24.330399    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:24.349829    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:24.349921    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:24.363631    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:24.363706    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:24.375858    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:24.375941    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:24.386726    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:24.386793    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:24.397208    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:24.397274    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:24.407389    8395 logs.go:276] 0 containers: []
	W0617 04:41:24.407400    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:24.407455    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:24.418397    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:24.418413    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:24.418418    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:24.443958    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:24.443967    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:24.469192    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:24.469199    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:24.505158    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:24.505171    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:24.540314    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:24.540327    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:24.551254    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:24.551266    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:24.565496    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:24.565508    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:24.582166    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:24.582181    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:41:24.594116    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:24.594128    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:24.608032    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:24.608044    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:24.626220    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:24.626231    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:24.640390    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:24.640400    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:24.655805    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:24.655818    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:24.666837    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:24.666848    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:24.686750    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:24.686759    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:24.698042    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:24.698064    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:24.702118    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:24.702126    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:27.221443    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:32.223063    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:32.223590    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:32.265885    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:32.266022    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:32.287556    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:32.287671    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:32.302215    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:32.302308    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:32.318515    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:32.318591    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:32.328506    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:32.328572    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:32.339323    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:32.339385    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:32.349368    8395 logs.go:276] 0 containers: []
	W0617 04:41:32.349380    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:32.349430    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:32.359999    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:32.360020    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:32.360025    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:32.397221    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:32.397231    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:32.423215    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:32.423229    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:32.437586    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:32.437594    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:32.462282    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:32.462292    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:32.473950    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:32.473961    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:32.507765    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:32.507778    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:32.522482    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:32.522495    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:32.536602    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:32.536612    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:32.550946    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:32.550959    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:32.562994    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:32.563007    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:32.574165    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:32.574176    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:32.587967    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:32.587978    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:32.605551    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:32.605560    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:32.609769    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:32.609778    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:32.624139    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:32.624153    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:32.636062    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:32.636077    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:41:35.150283    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:40.153067    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:40.153488    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:40.187292    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:40.187430    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:40.208222    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:40.208319    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:40.222749    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:40.222831    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:40.234867    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:40.234940    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:40.245273    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:40.245339    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:40.260539    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:40.260600    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:40.271072    8395 logs.go:276] 0 containers: []
	W0617 04:41:40.271082    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:40.271131    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:40.282781    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:40.282798    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:40.282804    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:40.293909    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:40.293921    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:41:40.305163    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:40.305174    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:40.330901    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:40.330909    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:40.355835    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:40.355845    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:40.372730    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:40.372742    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:40.397398    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:40.397411    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:40.412176    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:40.412189    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:40.423788    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:40.423801    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:40.434675    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:40.434685    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:40.438787    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:40.438796    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:40.452234    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:40.452245    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:40.469747    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:40.469758    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:40.492559    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:40.492569    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:40.503999    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:40.504010    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:40.518796    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:40.518809    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:40.553893    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:40.553903    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:43.089450    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:48.092305    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:48.092721    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:48.134885    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:48.135010    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:48.160510    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:48.160613    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:48.174471    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:48.174553    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:48.186013    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:48.186080    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:48.200212    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:48.200281    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:48.210388    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:48.210458    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:48.220382    8395 logs.go:276] 0 containers: []
	W0617 04:41:48.220395    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:48.220450    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:48.231055    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:48.231073    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:48.231079    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:48.245078    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:48.245090    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:48.256922    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:48.256936    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:48.282849    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:48.282855    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:48.317182    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:48.317191    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:48.342812    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:48.342822    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:48.354315    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:48.354324    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:48.365191    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:48.365200    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:48.379090    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:48.379101    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:48.403861    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:48.403873    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:48.415212    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:48.418946    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:48.435943    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:48.435954    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:48.471759    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:48.471767    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:48.475795    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:48.475800    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:48.490105    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:48.490117    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:48.501942    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:48.501957    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:48.516562    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:48.516573    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:41:51.029785    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:41:56.031943    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:41:56.032158    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:41:56.052335    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:41:56.052425    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:41:56.070376    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:41:56.070447    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:41:56.081621    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:41:56.081681    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:41:56.092040    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:41:56.092115    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:41:56.101895    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:41:56.101965    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:41:56.112799    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:41:56.112859    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:41:56.122682    8395 logs.go:276] 0 containers: []
	W0617 04:41:56.122695    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:41:56.122755    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:41:56.132919    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:41:56.132935    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:41:56.132940    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:41:56.170656    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:41:56.170664    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:41:56.186064    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:41:56.186078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:41:56.197035    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:41:56.197045    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:41:56.213938    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:41:56.213951    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:41:56.218392    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:41:56.218402    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:41:56.243412    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:41:56.243425    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:41:56.267130    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:41:56.267137    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:41:56.279254    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:41:56.279268    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:41:56.314716    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:41:56.314727    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:41:56.328255    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:41:56.328268    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:41:56.342313    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:41:56.342326    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:41:56.357054    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:41:56.357065    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:41:56.369143    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:41:56.369154    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:41:56.383810    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:41:56.383819    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:41:56.395387    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:41:56.395399    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:41:56.407161    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:41:56.407172    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:41:58.920432    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:03.922207    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:03.922354    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:03.936771    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:03.936852    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:03.950518    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:03.950591    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:03.960742    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:03.960810    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:03.971223    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:03.971284    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:03.981223    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:03.981288    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:03.991821    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:03.991889    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:04.001893    8395 logs.go:276] 0 containers: []
	W0617 04:42:04.001907    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:04.001961    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:04.011931    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:04.011949    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:04.011958    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:04.049084    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:04.049093    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:04.062915    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:04.062927    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:04.074552    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:04.074564    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:04.086295    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:04.086306    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:04.098242    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:04.098255    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:04.136365    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:04.136377    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:04.153209    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:04.153221    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:04.172754    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:04.172767    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:04.184285    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:04.184294    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:04.195643    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:04.195651    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:04.209149    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:04.209157    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:04.233276    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:04.233286    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:04.245039    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:04.245049    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:04.249281    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:04.249288    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:04.274325    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:04.274338    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:04.293209    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:04.293222    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:06.811439    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:11.814191    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:11.814301    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:11.828060    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:11.828123    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:11.838571    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:11.838638    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:11.849758    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:11.849827    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:11.860502    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:11.860567    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:11.872386    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:11.872452    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:11.883432    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:11.883500    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:11.894111    8395 logs.go:276] 0 containers: []
	W0617 04:42:11.894124    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:11.894175    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:11.904675    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:11.904696    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:11.904701    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:11.918782    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:11.918792    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:11.933011    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:11.933021    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:11.947193    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:11.947204    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:11.961994    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:11.962009    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:11.984632    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:11.984641    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:11.996438    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:11.996448    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:12.009078    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:12.009089    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:12.033314    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:12.033321    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:12.069547    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:12.069559    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:12.081664    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:12.081676    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:12.099047    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:12.099056    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:12.113236    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:12.113246    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:12.151481    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:12.151494    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:12.179222    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:12.179237    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:12.195309    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:12.195320    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:12.207252    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:12.207265    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:14.714300    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:19.716944    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:19.717081    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:19.728861    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:19.728942    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:19.740389    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:19.740462    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:19.751237    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:19.751303    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:19.762125    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:19.762193    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:19.772817    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:19.772883    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:19.783221    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:19.783286    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:19.793534    8395 logs.go:276] 0 containers: []
	W0617 04:42:19.793546    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:19.793605    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:19.803992    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:19.804007    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:19.804014    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:19.815747    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:19.815759    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:19.829979    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:19.829991    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:19.845804    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:19.845815    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:19.861766    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:19.861778    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:19.873163    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:19.873175    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:19.877617    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:19.877626    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:19.889202    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:19.889212    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:19.919659    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:19.919669    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:19.955497    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:19.955509    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:19.971274    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:19.971286    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:19.987325    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:19.987334    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:19.998663    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:19.998674    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:20.016189    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:20.016200    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:20.054113    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:20.054144    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:20.069732    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:20.069742    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:20.095888    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:20.095898    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:22.609708    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:27.612035    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:27.612390    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:27.643765    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:27.643892    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:27.662225    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:27.662314    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:27.676406    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:27.676480    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:27.687570    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:27.687635    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:27.698323    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:27.698394    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:27.709239    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:27.709301    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:27.724663    8395 logs.go:276] 0 containers: []
	W0617 04:42:27.724674    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:27.724726    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:27.735267    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:27.735283    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:27.735289    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:27.772678    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:27.772687    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:27.797859    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:27.797871    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:27.812433    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:27.812446    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:27.829165    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:27.829177    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:27.843763    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:27.843774    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:27.867012    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:27.867023    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:27.903283    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:27.903291    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:27.907772    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:27.907780    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:27.921709    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:27.921723    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:27.935753    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:27.935767    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:27.957645    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:27.957654    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:27.969560    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:27.969575    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:27.983677    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:27.983691    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:27.994991    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:27.995001    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:28.005930    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:28.005945    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:28.019862    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:28.019870    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
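
The eight-second cadence above (a healthz probe, the "Client.Timeout exceeded" failure five seconds later, then a few seconds of log gathering before the next probe) is an ordinary poll-with-timeout loop. Below is a minimal Go sketch of that pattern, not minikube's actual api_server.go code: the URL and the 5s client timeout come from the log, while the retry pause and the skipped TLS verification are assumptions.

    // Minimal sketch of the poll-with-timeout loop visible in the log; not
    // minikube's actual implementation. URL and 5s timeout come from the log;
    // the retry pause and InsecureSkipVerify are assumptions.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches "Client.Timeout exceeded" ~5s after each probe
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumed: self-signed apiserver cert
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
                time.Sleep(3 * time.Second)  // assumed pause; in the log the gap is filled by log gathering
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
    }
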
	I0617 04:42:30.533921    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:35.536199    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:35.536353    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:35.550055    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:35.550124    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:35.560922    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:35.560994    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:35.571285    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:35.571345    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:35.581762    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:35.581831    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:35.592289    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:35.592361    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:35.602591    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:35.602657    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:35.613017    8395 logs.go:276] 0 containers: []
	W0617 04:42:35.613032    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:35.613094    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:35.625611    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:35.625629    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:35.625635    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:35.636958    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:35.636970    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:35.651532    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:35.651542    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:35.655914    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:35.655919    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:35.667523    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:35.667535    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:35.681826    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:35.681836    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:35.717768    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:35.717777    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:35.743884    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:35.743894    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:35.755335    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:35.755346    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:35.775999    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:35.776014    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:35.793903    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:35.793914    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:35.819870    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:35.819878    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:35.831541    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:35.831552    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:35.866146    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:35.866159    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:35.881909    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:35.881920    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:35.893546    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:35.893557    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:35.904416    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:35.904427    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
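
Each failed probe is followed by a rediscovery of the control-plane containers: one filtered `docker ps -a` per component, matching on the k8s_<component> name prefix (in the run above these commands travel through ssh_runner into the VM). A rough local sketch of that discovery step, assuming docker is on PATH; the component list is exactly the one the log queries.

    // Sketch of the per-component container discovery: one filtered
    // `docker ps` per k8s_<component> name, collecting container IDs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            // Zero matches is what produces the log's warning:
            // No container was found matching "kindnet"
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
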
	I0617 04:42:38.418570    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:43.419891    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:43.420951    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:43.470347    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:43.470477    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:43.488118    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:43.488214    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:43.500822    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:43.500897    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:43.512261    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:43.512327    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:43.525802    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:43.525872    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:43.536271    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:43.536339    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:43.546027    8395 logs.go:276] 0 containers: []
	W0617 04:42:43.546039    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:43.546092    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:43.556954    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:43.556974    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:43.556980    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:43.570634    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:43.570648    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:43.582696    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:43.582708    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:43.608388    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:43.608398    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:43.621121    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:43.621136    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:43.658235    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:43.658243    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:43.662618    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:43.662628    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:43.676405    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:43.676442    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:43.690897    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:43.690909    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:43.716650    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:43.716665    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:43.729301    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:43.729312    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:43.740932    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:43.740947    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:43.751778    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:43.751790    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:43.763192    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:43.763203    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:43.798997    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:43.799008    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:43.816633    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:43.816645    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:43.833360    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:43.833370    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
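
Each "Gathering logs for <component> [<id>] ..." line wraps a single `docker logs --tail 400 <id>` run through the same /bin/bash -c wrapper the log records. A small sketch of that step, using two container IDs copied from the log above; the real code would feed in the discovery results rather than hard-coding them.

    // Sketch of one "Gathering logs" step: tail the last 400 lines of a
    // container via the /bin/bash -c wrapper seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(component, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", component, err)
            return
        }
        fmt.Print(string(out))
    }

    func main() {
        // IDs copied from the log above, for illustration only.
        gather("kube-apiserver", "bc463d808817")
        gather("etcd", "cef9c5b669dd")
    }
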
	I0617 04:42:46.349120    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:51.351347    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:51.351502    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:51.364840    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:51.364920    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:51.379055    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:51.379127    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:51.389984    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:51.390054    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:51.400598    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:51.400665    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:51.410948    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:51.411012    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:51.422060    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:51.422128    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:51.435540    8395 logs.go:276] 0 containers: []
	W0617 04:42:51.435553    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:51.435609    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:51.446563    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:51.446585    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:51.446591    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:51.481417    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:51.481431    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:51.496362    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:51.496373    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:51.511854    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:51.511864    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:51.548666    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:51.548677    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:51.553203    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:51.553210    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:51.567652    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:51.567664    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:42:51.579425    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:51.579440    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:51.590813    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:51.590825    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:51.615557    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:51.615571    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:51.629699    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:51.629711    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:51.655044    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:51.655055    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:51.669138    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:51.669152    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:51.681139    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:51.681149    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:51.693356    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:51.693367    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:51.713736    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:51.713751    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:51.729692    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:51.729703    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:54.257983    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:42:59.260830    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:42:59.261753    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:42:59.301469    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:42:59.301611    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:42:59.322609    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:42:59.322697    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:42:59.337102    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:42:59.337186    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:42:59.349462    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:42:59.349538    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:42:59.360343    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:42:59.360408    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:42:59.370856    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:42:59.370930    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:42:59.381007    8395 logs.go:276] 0 containers: []
	W0617 04:42:59.381019    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:42:59.381077    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:42:59.395733    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:42:59.395752    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:42:59.395757    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:42:59.409745    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:42:59.409758    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:42:59.420973    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:42:59.420987    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:42:59.433060    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:42:59.433077    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:42:59.457349    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:42:59.457356    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:42:59.461409    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:42:59.461416    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:42:59.494665    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:42:59.494677    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:42:59.509694    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:42:59.509708    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:42:59.524720    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:42:59.524732    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:42:59.538649    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:42:59.538660    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:42:59.554600    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:42:59.554612    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:42:59.591136    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:42:59.591144    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:42:59.616412    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:42:59.616422    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:42:59.628520    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:42:59.628532    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:42:59.645284    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:42:59.645296    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:42:59.660328    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:42:59.660340    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:42:59.671758    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:42:59.671768    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
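
The "container status" command just above is worth unpacking: `which crictl || echo crictl` always yields a non-empty word, so when crictl is absent the sudo invocation fails with "command not found" and the outer `||` falls through to `sudo docker ps -a`. A sketch that runs the same one-liner verbatim, assuming a bash-plus-sudo environment like the test VM's.

    // Runs the log's "container status" one-liner verbatim. Backticks and
    // the || chain are bash syntax, so the whole string goes through bash -c.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
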
	I0617 04:43:02.186031    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:07.188757    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:07.189200    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:07.229035    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:07.229173    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:07.251771    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:07.251882    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:07.271936    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:07.272014    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:07.283296    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:07.283365    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:07.293486    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:07.293551    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:07.304063    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:07.304144    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:07.313567    8395 logs.go:276] 0 containers: []
	W0617 04:43:07.313579    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:07.313640    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:07.324084    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:07.324103    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:07.324109    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:07.338045    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:07.338058    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:07.362561    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:07.362574    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:07.367124    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:07.367133    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:07.380897    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:07.380909    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:07.394821    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:07.394840    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:07.406319    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:07.406333    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:07.424690    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:07.424705    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:07.449347    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:07.449357    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:07.486642    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:07.486652    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:07.498171    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:07.498183    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:07.509581    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:07.509594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:07.524511    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:07.524521    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:07.536646    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:07.536659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:07.572433    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:07.572445    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:07.591768    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:07.591781    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:07.603315    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:07.603324    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:10.120713    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:15.121929    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:15.122105    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:15.134812    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:15.134902    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:15.148966    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:15.149039    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:15.159524    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:15.159594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:15.170436    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:15.170512    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:15.181465    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:15.181546    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:15.193824    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:15.193901    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:15.206129    8395 logs.go:276] 0 containers: []
	W0617 04:43:15.206143    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:15.206207    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:15.219254    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:15.219272    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:15.219278    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:15.224541    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:15.224554    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:15.269416    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:15.269434    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:15.286388    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:15.286403    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:15.311534    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:15.311556    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:15.343680    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:15.343695    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:15.372723    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:15.372744    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:15.413833    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:15.413854    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:15.443537    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:15.443558    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:15.464152    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:15.464171    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:15.489980    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:15.489992    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:15.505337    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:15.505353    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:15.524762    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:15.524776    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:15.537106    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:15.537120    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:15.552170    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:15.552182    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:15.563948    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:15.563960    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:15.575777    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:15.575791    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:18.094503    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:23.097165    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:23.097593    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:23.137291    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:23.137427    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:23.158543    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:23.158643    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:23.172848    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:23.172938    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:23.184974    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:23.185049    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:23.200704    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:23.200776    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:23.210984    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:23.211048    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:23.221073    8395 logs.go:276] 0 containers: []
	W0617 04:43:23.221085    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:23.221144    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:23.231485    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:23.231502    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:23.231507    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:23.247260    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:23.247272    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:23.258828    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:23.258839    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:23.263233    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:23.263240    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:23.298937    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:23.298952    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:23.314035    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:23.314047    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:23.328899    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:23.328910    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:23.340633    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:23.340643    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:23.357649    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:23.357659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:23.381612    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:23.381621    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:23.418995    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:23.419005    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:23.449880    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:23.449890    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:23.461569    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:23.461582    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:23.475443    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:23.475454    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:23.490628    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:23.490640    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:23.505551    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:23.505561    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:23.517870    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:23.517879    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:26.031572    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:31.033748    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:31.033903    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:31.049371    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:31.049457    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:31.066629    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:31.066705    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:31.077231    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:31.077301    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:31.087678    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:31.087750    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:31.102627    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:31.102699    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:31.113782    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:31.113852    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:31.124028    8395 logs.go:276] 0 containers: []
	W0617 04:43:31.124040    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:31.124095    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:31.134963    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:31.134980    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:31.134985    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:31.148275    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:31.148290    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:31.159792    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:31.159802    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:31.175024    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:31.175038    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:31.187912    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:31.187924    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:31.207888    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:31.207902    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:31.226658    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:31.226673    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:31.268207    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:31.268220    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:31.273000    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:31.273009    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:31.310234    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:31.310245    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:31.322178    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:31.322190    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:31.346500    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:31.346510    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:31.361079    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:31.361090    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:31.387282    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:31.387293    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:31.401740    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:31.401751    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:31.413594    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:31.413608    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:31.432855    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:31.432866    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:33.954578    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:38.956839    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:38.957257    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:38.997183    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:38.997325    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:39.019343    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:39.019464    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:39.034836    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:39.034909    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:39.047788    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:39.047864    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:39.058829    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:39.058891    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:39.070081    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:39.070141    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:39.080750    8395 logs.go:276] 0 containers: []
	W0617 04:43:39.080763    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:39.080820    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:39.091078    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:39.091094    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:39.091100    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:39.095581    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:39.095590    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:39.109615    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:39.109624    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:39.124185    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:39.124202    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:39.139295    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:39.139307    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:39.162581    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:39.162588    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:39.199637    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:39.199645    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:39.233704    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:39.233719    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:39.248982    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:39.248994    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:39.260305    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:39.260318    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:39.275128    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:39.275139    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:39.286848    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:39.286860    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:39.298821    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:39.298832    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:39.324771    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:39.324783    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:39.338618    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:39.338626    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:39.350274    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:39.350285    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:39.369922    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:39.369933    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:41.883317    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:46.885979    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:46.886187    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:46.912878    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:46.912992    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:46.931247    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:46.931336    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:46.944310    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:46.944383    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:46.955734    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:46.955799    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:46.966128    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:46.966194    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:46.976482    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:46.976543    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:46.992604    8395 logs.go:276] 0 containers: []
	W0617 04:43:46.992615    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:46.992672    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:47.002835    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:47.002853    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:47.002859    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:47.017692    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:47.017704    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:47.031156    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:47.031168    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:47.044139    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:47.044149    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:47.079540    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:47.079550    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:47.091205    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:47.091216    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:47.106271    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:47.106284    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:47.129436    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:47.129446    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:47.154160    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:47.154169    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:47.175076    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:47.175090    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:47.196307    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:47.196317    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:47.207269    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:47.207280    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:47.224808    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:47.224821    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:47.239263    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:47.239273    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:47.276299    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:47.276308    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:47.280328    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:47.280333    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:47.291728    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:47.291738    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:49.805891    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:54.808161    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:54.808527    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:54.846974    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:54.847102    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:54.871889    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:54.871994    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:54.886232    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:54.886304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:54.898101    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:54.898170    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:54.908465    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:54.908539    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:54.922041    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:54.922113    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:54.932091    8395 logs.go:276] 0 containers: []
	W0617 04:43:54.932104    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:54.932151    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:54.942657    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
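
The enumeration step runs one docker ps query per control-plane component, matching the k8s_<component> name prefix that cri-dockerd gives pod containers; two IDs per component here mean an exited container plus its restarted successor. A hedged Go sketch of the same query (the command shape and component list are copied from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the k8s_<component> prefix, as in the docker ps calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // e.g. "2 containers: [...]"
	}
}
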
	I0617 04:43:54.942674    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:54.942680    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:54.982884    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:54.982896    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:54.996234    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:54.996247    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:55.012284    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:55.012299    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:55.025066    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:55.025078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:55.040315    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:55.040327    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:55.071227    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:55.071241    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:55.087055    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:55.087067    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:55.103368    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:55.103381    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:55.119245    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:55.119256    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:55.131744    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:55.131757    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:55.136401    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:55.136409    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:55.156072    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:55.156090    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:55.194217    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:55.194239    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:55.208864    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:55.208877    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:55.220873    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:55.220883    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:55.231862    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:55.231875    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:57.756353    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:02.758968    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:02.759175    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:02.785027    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:44:02.785150    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:02.803528    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:44:02.803609    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:02.819927    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:44:02.819999    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:02.831375    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:44:02.831442    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:02.842072    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:44:02.842143    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:02.852694    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:44:02.852753    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:02.863198    8395 logs.go:276] 0 containers: []
	W0617 04:44:02.863208    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:02.863254    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:02.873427    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:44:02.873448    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:44:02.873454    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:44:02.898337    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:44:02.898347    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:44:02.912202    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:44:02.912214    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:44:02.929635    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:44:02.929649    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:44:02.943768    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:02.943779    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:02.981438    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:02.981447    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:02.985462    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:44:02.985470    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:44:02.997023    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:44:02.997034    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:44:03.012595    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:44:03.012608    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:44:03.030101    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:44:03.030112    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:44:03.044106    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:44:03.044118    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:44:03.062530    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:44:03.062543    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:44:03.077348    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:03.077357    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:03.110877    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:44:03.110894    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:44:03.122122    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:44:03.122132    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:44:03.134028    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:03.134042    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:03.155979    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:44:03.155985    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:05.675328    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:10.677766    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:10.677874    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:10.690609    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:44:10.690684    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:10.701661    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:44:10.701732    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:10.712267    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:44:10.712336    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:10.724110    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:44:10.724185    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:10.734361    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:44:10.734426    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:10.745326    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:44:10.745395    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:10.755703    8395 logs.go:276] 0 containers: []
	W0617 04:44:10.755719    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:10.755775    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:10.767721    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:44:10.767739    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:44:10.767744    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:44:10.778689    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:44:10.778702    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:44:10.793027    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:10.793038    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:10.833847    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:44:10.833858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:44:10.851293    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:44:10.851301    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:44:10.862525    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:44:10.862537    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:44:10.874122    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:44:10.874138    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:44:10.887801    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:44:10.887810    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:44:10.902748    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:44:10.902765    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:44:10.914755    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:44:10.914765    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:44:10.933030    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:44:10.933041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:44:10.944902    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:10.944911    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:10.967147    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:44:10.967156    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:44:10.993160    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:10.993170    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:11.050281    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:44:11.050292    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:44:11.063530    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:44:11.063540    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:11.076883    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:11.076893    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:13.583518    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:18.584713    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:18.584757    8395 kubeadm.go:591] duration metric: took 4m4.28340175s to restartPrimaryControlPlane
	W0617 04:44:18.584791    8395 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 04:44:18.584807    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0617 04:44:19.589955    8395 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005148292s)
	I0617 04:44:19.590023    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 04:44:19.594853    8395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:44:19.597471    8395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:44:19.600560    8395 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:44:19.600566    8395 kubeadm.go:156] found existing configuration files:
	
	I0617 04:44:19.600591    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf
	I0617 04:44:19.603687    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:44:19.603711    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:44:19.606376    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf
	I0617 04:44:19.608725    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:44:19.608750    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:44:19.611615    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf
	I0617 04:44:19.614324    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:44:19.614344    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:44:19.616811    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf
	I0617 04:44:19.619653    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:44:19.619674    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
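
The sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the grep fails. In this run the files were already gone after kubeadm reset, so every grep exits with status 2 and every rm -f is a no-op. A hedged sketch of the check-then-remove loop (endpoint and file list taken from the log; not the actual kubeadm.go code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51289"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range confs {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			// rm -f tolerates already-missing files, matching the log's no-op removals.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
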
	I0617 04:44:19.622137    8395 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 04:44:19.638454    8395 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0617 04:44:19.638483    8395 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 04:44:19.685224    8395 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 04:44:19.685280    8395 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 04:44:19.685329    8395 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 04:44:19.739501    8395 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 04:44:19.746452    8395 out.go:204]   - Generating certificates and keys ...
	I0617 04:44:19.746484    8395 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 04:44:19.746515    8395 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 04:44:19.746550    8395 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 04:44:19.746582    8395 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 04:44:19.746622    8395 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 04:44:19.746656    8395 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 04:44:19.746692    8395 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 04:44:19.746727    8395 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 04:44:19.746764    8395 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 04:44:19.746804    8395 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 04:44:19.746821    8395 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 04:44:19.746854    8395 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 04:44:19.905499    8395 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 04:44:20.058851    8395 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 04:44:20.242351    8395 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 04:44:20.286744    8395 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 04:44:20.317756    8395 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 04:44:20.318143    8395 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 04:44:20.318200    8395 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 04:44:20.397246    8395 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 04:44:20.405353    8395 out.go:204]   - Booting up control plane ...
	I0617 04:44:20.405454    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 04:44:20.405498    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 04:44:20.405531    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 04:44:20.405705    8395 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 04:44:20.405822    8395 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 04:44:25.411874    8395 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.010209 seconds
	I0617 04:44:25.411958    8395 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 04:44:25.417398    8395 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 04:44:25.925514    8395 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 04:44:25.925617    8395 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-857000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 04:44:26.429050    8395 kubeadm.go:309] [bootstrap-token] Using token: yu7u84.ui5p86jwwxs7u8th
	I0617 04:44:26.435714    8395 out.go:204]   - Configuring RBAC rules ...
	I0617 04:44:26.435780    8395 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 04:44:26.435830    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 04:44:26.441340    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 04:44:26.442231    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 04:44:26.443092    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 04:44:26.443882    8395 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 04:44:26.447073    8395 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 04:44:26.627748    8395 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 04:44:26.832970    8395 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 04:44:26.833531    8395 kubeadm.go:309] 
	I0617 04:44:26.833561    8395 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 04:44:26.833565    8395 kubeadm.go:309] 
	I0617 04:44:26.833613    8395 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 04:44:26.833616    8395 kubeadm.go:309] 
	I0617 04:44:26.833640    8395 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 04:44:26.833671    8395 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 04:44:26.833705    8395 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 04:44:26.833708    8395 kubeadm.go:309] 
	I0617 04:44:26.833731    8395 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 04:44:26.833754    8395 kubeadm.go:309] 
	I0617 04:44:26.833776    8395 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 04:44:26.833778    8395 kubeadm.go:309] 
	I0617 04:44:26.833811    8395 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 04:44:26.833850    8395 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 04:44:26.833893    8395 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 04:44:26.833896    8395 kubeadm.go:309] 
	I0617 04:44:26.833933    8395 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 04:44:26.833972    8395 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 04:44:26.833976    8395 kubeadm.go:309] 
	I0617 04:44:26.834018    8395 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yu7u84.ui5p86jwwxs7u8th \
	I0617 04:44:26.834075    8395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb \
	I0617 04:44:26.834088    8395 kubeadm.go:309] 	--control-plane 
	I0617 04:44:26.834091    8395 kubeadm.go:309] 
	I0617 04:44:26.834138    8395 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 04:44:26.834145    8395 kubeadm.go:309] 
	I0617 04:44:26.834184    8395 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yu7u84.ui5p86jwwxs7u8th \
	I0617 04:44:26.834250    8395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb 
	I0617 04:44:26.834310    8395 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 04:44:26.834319    8395 cni.go:84] Creating CNI manager for ""
	I0617 04:44:26.834327    8395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:44:26.837393    8395 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 04:44:26.843370    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 04:44:26.846238    8395 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
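
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log, so the field values below are assumptions rather than minikube's actual bridge template; the sketch only illustrates the general shape of a bridge CNI config list like the one this "Configuring bridge CNI" step installs:

package main

import (
	"encoding/json"
	"fmt"
)

// A minimal bridge CNI config list. Field values are assumptions for the
// sketch, since the log only reports the file's size (496 bytes), not its body.
func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // hypothetical pod CIDR
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b)) // would be written to /etc/cni/net.d/1-k8s.conflist
}
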
	I0617 04:44:26.851076    8395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 04:44:26.851131    8395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 04:44:26.851139    8395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-857000 minikube.k8s.io/updated_at=2024_06_17T04_44_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=84fc08e1aa3123a23ee19b25404b578b39fd2f91 minikube.k8s.io/name=running-upgrade-857000 minikube.k8s.io/primary=true
	I0617 04:44:26.894197    8395 ops.go:34] apiserver oom_adj: -16
	I0617 04:44:26.894197    8395 kubeadm.go:1107] duration metric: took 43.098ms to wait for elevateKubeSystemPrivileges
	W0617 04:44:26.894241    8395 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 04:44:26.894248    8395 kubeadm.go:393] duration metric: took 4m12.607426s to StartCluster
	I0617 04:44:26.894258    8395 settings.go:142] acquiring lock: {Name:mkdf59d9cf591c81341c913869983ffa33afef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:44:26.894443    8395 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:44:26.894808    8395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:44:26.895026    8395 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:44:26.899365    8395 out.go:177] * Verifying Kubernetes components...
	I0617 04:44:26.895038    8395 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 04:44:26.895092    8395 config.go:182] Loaded profile config "running-upgrade-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:44:26.907184    8395 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-857000"
	I0617 04:44:26.907187    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:44:26.907199    8395 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-857000"
	I0617 04:44:26.907203    8395 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-857000"
	W0617 04:44:26.907205    8395 addons.go:243] addon storage-provisioner should already be in state true
	I0617 04:44:26.907213    8395 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-857000"
	I0617 04:44:26.907217    8395 host.go:66] Checking if "running-upgrade-857000" exists ...
	I0617 04:44:26.908388    8395 kapi.go:59] client config for running-upgrade-857000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104280460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:44:26.908508    8395 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-857000"
	W0617 04:44:26.908514    8395 addons.go:243] addon default-storageclass should already be in state true
	I0617 04:44:26.908523    8395 host.go:66] Checking if "running-upgrade-857000" exists ...
	I0617 04:44:26.913284    8395 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:44:26.916393    8395 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:44:26.916402    8395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 04:44:26.916411    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:44:26.917148    8395 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 04:44:26.917152    8395 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 04:44:26.917156    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:44:27.001576    8395 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:44:27.006630    8395 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:44:27.006671    8395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:44:27.010476    8395 api_server.go:72] duration metric: took 115.440041ms to wait for apiserver process to appear ...
	I0617 04:44:27.010484    8395 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:44:27.010491    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:27.037479    8395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 04:44:27.045271    8395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
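
Addon enablement is two steps per addon: scp the manifest into /etc/kubernetes/addons/ over the SSH client opened above, then kubectl apply it with the in-VM kubeconfig. A minimal sketch of the apply step (paths and env copied from the log; executed locally here for brevity, whereas minikube runs it through ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddon(manifest string) error {
	// sudo accepts leading VAR=value assignments, exactly as in the log's command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			// With the apiserver unreachable (as in this run), apply times out.
			fmt.Println("apply failed:", err)
		}
	}
}

With the apiserver still unreachable, the storageclass apply is what later surfaces as the "Enabling 'default-storageclass' returned an error" warning further down.
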
	I0617 04:44:32.012634    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:32.012678    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:37.012981    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:37.013010    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:42.013398    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:42.013442    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:47.014062    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:47.014099    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:52.014742    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:52.014792    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:57.015848    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:57.015907    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0617 04:44:57.396608    8395 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0617 04:44:57.405937    8395 out.go:177] * Enabled addons: storage-provisioner
	I0617 04:44:57.414072    8395 addons.go:510] duration metric: took 30.5193445s for enable addons: enabled=[storage-provisioner]
	I0617 04:45:02.017193    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:02.017229    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:07.019256    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:07.019311    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:12.021425    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:12.021461    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:17.023629    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:17.023659    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:22.025119    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:22.025193    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:27.026950    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:27.027044    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:27.038179    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:27.038253    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:27.048411    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:27.048472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:27.058715    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:27.058787    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:27.068937    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:27.069002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:27.079346    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:27.079416    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:27.090366    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:27.090437    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:27.100620    8395 logs.go:276] 0 containers: []
	W0617 04:45:27.100632    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:27.100689    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:27.110878    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:27.110892    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:27.110898    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:27.122170    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:27.122183    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:27.136140    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:27.136153    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:27.151224    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:27.151237    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:27.168529    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:27.168540    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:27.188114    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:27.188138    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:27.199950    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:27.199960    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:27.211714    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:27.211723    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:27.223591    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:27.223603    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:27.246801    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:27.246809    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:27.282293    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:27.282300    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:27.286399    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:27.286406    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:27.321130    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:27.321143    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:29.837152    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:34.839480    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:34.839641    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:34.857132    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:34.857215    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:34.871068    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:34.871136    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:34.882319    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:34.882387    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:34.892948    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:34.893012    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:34.903558    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:34.903624    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:34.914117    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:34.914186    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:34.924723    8395 logs.go:276] 0 containers: []
	W0617 04:45:34.924735    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:34.924789    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:34.935575    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:34.935591    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:34.935597    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:34.940201    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:34.940209    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:34.956562    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:34.956576    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:34.968690    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:34.968704    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:34.987354    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:34.987365    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:34.998952    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:34.998963    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:35.023506    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:35.023517    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:35.062701    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:35.062720    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:35.098833    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:35.098849    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:35.113598    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:35.113612    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:35.124784    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:35.124796    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:35.139314    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:35.139324    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:35.154906    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:35.154917    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:37.668346    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:42.670649    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:42.670857    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:42.696193    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:42.696278    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:42.711068    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:42.711136    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:42.729619    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:42.729689    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:42.740436    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:42.740500    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:42.752786    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:42.752859    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:42.763012    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:42.763081    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:42.773092    8395 logs.go:276] 0 containers: []
	W0617 04:45:42.773103    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:42.773151    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:42.783613    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:42.783628    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:42.783634    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:42.795234    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:42.795244    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:42.799693    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:42.799700    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:42.833952    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:42.833964    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:42.849289    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:42.849303    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:42.863337    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:42.863350    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:42.877331    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:42.877341    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:42.889316    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:42.889330    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:42.925523    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:42.925531    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:42.937332    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:42.937343    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:42.953013    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:42.953027    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:42.970938    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:42.970951    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:42.982357    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:42.982370    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:45.507724    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:50.510048    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:50.510307    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:50.541371    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:50.541500    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:50.560313    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:50.560403    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:50.573822    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:50.573891    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:50.585345    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:50.585416    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:50.596164    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:50.596242    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:50.606907    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:50.606978    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:50.626019    8395 logs.go:276] 0 containers: []
	W0617 04:45:50.626030    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:50.626087    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:50.637035    8395 logs.go:276] 1 containers: [4e7e41cba40d]
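Each failed probe is followed by the enumeration block ending here: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, with the resulting count and IDs echoed on the logs.go:276 lines (and a logs.go:278 warning when nothing matches, as with "kindnet"). A minimal sketch of that two-step pattern, assuming plain `docker` CLI access; the function and variable names are invented for illustration, not minikube's logs.go:

```go
// Enumerate k8s_<component> containers, then tail each one's last 400 log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("error listing", c, ":", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
		for _, id := range ids {
			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
```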
	I0617 04:45:50.637054    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:50.637060    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:50.651089    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:50.651103    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:50.663507    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:50.663519    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:50.678469    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:50.678482    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:50.700718    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:50.700729    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:50.712697    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:50.712710    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:50.724108    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:50.724119    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:50.735599    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:50.735610    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:50.759579    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:50.759586    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:50.797416    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:50.797433    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:50.801935    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:50.801944    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:50.839843    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:50.839854    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:50.854383    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:50.854392    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
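Alongside the per-container tails, every cycle also collects host-side sources: kubelet and docker/cri-docker units via journalctl, filtered dmesg output, `kubectl describe nodes` run with the in-VM binary and kubeconfig, and a container-status listing that falls back from crictl to docker. The sketch below runs those commands the same way the transcript shows them (through `/bin/bash -c`); the command strings are copied verbatim from the log, while the map structure, ordering, and error handling are simplifications for illustration. Note the gather order genuinely varies between cycles in the log, which a map's random iteration order happens to mimic.

```go
// Host-side log gathering, with commands copied from the transcript above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet": `sudo journalctl -u kubelet -n 400`,
		"Docker":  `sudo journalctl -u docker -u cri-docker -n 400`,
		"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"describe nodes": `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes ` +
			`--kubeconfig=/var/lib/minikube/kubeconfig`,
		// falls back to docker if crictl is absent, exactly as in the log
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Printf("%s", out)
	}
}
```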
	I0617 04:45:53.368353    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:58.369312    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:58.369568    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:58.391073    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:58.391169    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:58.407124    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:58.407205    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:58.419558    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:58.419629    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:58.430593    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:58.430660    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:58.440830    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:58.440903    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:58.451527    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:58.451594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:58.462389    8395 logs.go:276] 0 containers: []
	W0617 04:45:58.462402    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:58.462460    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:58.472950    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:58.472966    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:58.472972    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:58.487788    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:58.487801    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:58.499617    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:58.499628    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:58.517365    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:58.517376    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:58.529089    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:58.529103    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:58.544177    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:58.544191    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:58.555604    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:58.555615    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:58.566879    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:58.566890    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:58.582088    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:58.582100    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:58.621954    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:58.621966    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:58.627334    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:58.627344    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:58.662596    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:58.662609    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:58.677574    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:58.677587    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:01.206303    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:06.208668    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:06.208958    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:06.234657    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:06.234782    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:06.252418    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:06.252496    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:06.265288    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:06.265363    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:06.276899    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:06.276975    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:06.287577    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:06.287650    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:06.298343    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:06.298412    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:06.308510    8395 logs.go:276] 0 containers: []
	W0617 04:46:06.308522    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:06.308581    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:06.319287    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:06.319301    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:06.319306    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:06.330766    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:06.330777    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:06.335150    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:06.335157    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:06.346184    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:06.346198    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:06.361092    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:06.361101    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:06.372341    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:06.372350    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:06.393998    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:06.394012    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:06.418746    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:06.418754    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:06.456444    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:06.456451    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:06.495616    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:06.495629    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:06.509596    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:06.509610    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:06.523068    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:06.523078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:06.534803    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:06.534816    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:09.048086    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:14.050385    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:14.050782    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:14.088309    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:14.088434    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:14.108381    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:14.108463    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:14.123871    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:14.123953    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:14.135827    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:14.135901    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:14.147381    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:14.147450    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:14.165903    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:14.165980    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:14.179949    8395 logs.go:276] 0 containers: []
	W0617 04:46:14.179962    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:14.180023    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:14.190039    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:14.190054    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:14.190062    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:14.204122    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:14.204133    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:14.228916    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:14.228930    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:14.244355    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:14.244368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:14.262215    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:14.262227    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:14.273648    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:14.273662    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:14.285301    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:14.285312    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:14.309616    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:14.309626    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:14.321553    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:14.321566    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:14.359599    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:14.359606    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:14.363842    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:14.363849    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:14.398500    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:14.398511    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:14.412843    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:14.412858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:16.926427    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:21.928793    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:21.929002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:21.957532    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:21.957642    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:21.976371    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:21.976439    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:21.988626    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:21.988692    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:21.999090    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:21.999152    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:22.016263    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:22.016338    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:22.026651    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:22.026716    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:22.036795    8395 logs.go:276] 0 containers: []
	W0617 04:46:22.036807    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:22.036869    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:22.047091    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:22.047107    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:22.047115    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:22.062276    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:22.062287    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:22.077316    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:22.077329    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:22.094990    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:22.095003    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:22.106549    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:22.106559    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:22.144328    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:22.144338    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:22.148555    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:22.148562    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:22.183225    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:22.183239    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:22.197487    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:22.197498    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:22.210467    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:22.210480    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:22.229328    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:22.229342    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:22.241254    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:22.241264    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:22.253477    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:22.253487    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:24.779344    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:29.781673    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:29.781821    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:29.797641    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:29.797728    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:29.812504    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:29.812576    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:29.824153    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:29.824215    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:29.834951    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:29.835019    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:29.845824    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:29.845896    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:29.860637    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:29.860706    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:29.870501    8395 logs.go:276] 0 containers: []
	W0617 04:46:29.870514    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:29.870584    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:29.880785    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:29.880801    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:29.880806    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:29.892328    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:29.892342    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:29.926500    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:29.926526    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:29.938389    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:29.938400    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:29.953591    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:29.953600    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:29.972585    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:29.972595    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:29.984312    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:29.984326    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:30.004661    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:30.004674    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:30.027692    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:30.027701    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:30.063578    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:30.063585    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:30.067932    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:30.067938    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:30.081891    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:30.081905    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:30.096282    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:30.096292    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:32.608603    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:37.609392    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:37.609600    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:37.637536    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:37.637657    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:37.654626    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:37.654708    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:37.668186    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:37.668261    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:37.679877    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:37.679949    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:37.691531    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:37.691608    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:37.701683    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:37.701751    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:37.711557    8395 logs.go:276] 0 containers: []
	W0617 04:46:37.711570    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:37.711630    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:37.721811    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:37.721827    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:37.721833    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:37.726394    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:37.726404    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:37.740567    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:37.740577    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:37.752202    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:37.752216    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:37.763392    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:37.763405    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:37.778241    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:37.778250    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:37.795735    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:37.795745    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:37.807927    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:37.807941    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:37.845595    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:37.845606    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:37.903313    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:37.903326    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:37.932273    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:37.932286    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:37.960629    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:37.960646    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:37.997943    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:37.997957    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:40.529582    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:45.531817    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:45.531935    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:45.543279    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:45.543349    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:45.553734    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:45.553802    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:45.564739    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:46:45.564815    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:45.575813    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:45.575881    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:45.586503    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:45.586579    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:45.596904    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:45.596997    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:45.608468    8395 logs.go:276] 0 containers: []
	W0617 04:46:45.608479    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:45.608533    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:45.619856    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:45.619875    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:45.619881    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:45.633580    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:46:45.633594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:46:45.644463    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:45.644473    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:45.656426    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:45.656440    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:45.667832    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:45.667845    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:45.685460    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:45.685472    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:45.729833    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:46:45.729847    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:46:45.741640    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:45.741652    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:45.755698    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:45.755711    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:45.770665    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:45.770676    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:45.807346    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:45.807354    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:45.811691    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:45.811697    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:45.836237    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:45.836247    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:45.847690    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:45.847701    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:45.859998    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:45.860012    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:48.373758    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:53.376054    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:53.376509    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:53.420903    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:53.421030    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:53.439875    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:53.439971    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:53.454806    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:46:53.454883    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:53.467055    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:53.467129    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:53.484378    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:53.484443    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:53.495313    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:53.495384    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:53.511585    8395 logs.go:276] 0 containers: []
	W0617 04:46:53.511597    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:53.511657    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:53.525522    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:53.525541    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:53.525547    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:53.530469    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:53.530476    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:53.542476    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:53.542488    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:53.560066    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:46:53.560077    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:46:53.571571    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:53.571582    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:53.584244    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:53.584258    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:53.622147    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:53.622161    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:53.658627    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:53.658640    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:53.673641    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:53.673651    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:53.688326    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:46:53.688336    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:46:53.699672    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:53.699686    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:53.711419    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:53.711431    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:53.735070    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:53.735078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:53.746338    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:53.746349    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:53.761270    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:53.761281    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:56.274924    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:01.277499    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:01.277734    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:01.304076    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:01.304201    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:01.322017    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:01.322102    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:01.336353    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:01.336429    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:01.347914    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:01.347992    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:01.359449    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:01.359523    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:01.371238    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:01.371304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:01.381727    8395 logs.go:276] 0 containers: []
	W0617 04:47:01.381738    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:01.381789    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:01.392285    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:01.392315    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:01.392323    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:01.407296    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:01.407311    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:01.419320    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:01.419331    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:01.433620    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:01.433631    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:01.452641    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:01.452651    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:01.463533    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:01.463544    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:01.481029    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:01.481041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:01.498812    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:01.498823    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:01.510855    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:01.510865    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:01.550282    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:01.550293    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:01.586908    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:01.586918    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:01.591559    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:01.591566    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:01.603677    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:01.603690    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:01.628278    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:01.628289    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:01.642354    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:01.642368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:04.159552    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:09.161801    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:09.161939    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:09.174068    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:09.174139    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:09.184747    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:09.184822    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:09.195139    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:09.195212    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:09.205792    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:09.205866    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:09.217070    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:09.217140    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:09.227421    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:09.227481    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:09.238039    8395 logs.go:276] 0 containers: []
	W0617 04:47:09.238048    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:09.238096    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:09.248741    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:09.248755    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:09.248760    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:09.286409    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:09.286421    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:09.323998    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:09.324012    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:09.342851    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:09.342865    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:09.354710    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:09.354722    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:09.359467    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:09.359476    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:09.370857    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:09.370867    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:09.382435    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:09.382447    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:09.399585    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:09.399595    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:09.411736    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:09.411746    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:09.423112    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:09.423122    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:09.440044    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:09.440055    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:09.451928    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:09.451938    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:09.469844    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:09.469858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:09.481940    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:09.481951    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:12.008126    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:17.010523    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:17.010662    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:17.022218    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:17.022285    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:17.033407    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:17.033478    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:17.044598    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:17.044666    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:17.055941    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:17.056002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:17.067750    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:17.067823    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:17.080479    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:17.080548    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:17.090817    8395 logs.go:276] 0 containers: []
	W0617 04:47:17.090831    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:17.090888    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:17.102214    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:17.102231    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:17.102237    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:17.140093    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:17.140107    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:17.158195    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:17.158205    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:17.174905    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:17.174918    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:17.189438    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:17.189448    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:17.201653    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:17.201664    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:17.217495    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:17.217507    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:17.241007    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:17.241014    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:17.245292    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:17.245301    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:17.257908    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:17.257921    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:17.270102    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:17.270112    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:17.282301    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:17.282314    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:17.318998    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:17.319005    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:17.333841    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:17.333849    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:17.349071    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:17.349085    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:19.863117    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:24.865541    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:24.865704    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:24.883478    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:24.883566    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:24.897548    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:24.897628    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:24.910190    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:24.910263    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:24.921013    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:24.921082    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:24.930976    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:24.931043    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:24.942205    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:24.942275    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:24.956266    8395 logs.go:276] 0 containers: []
	W0617 04:47:24.956278    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:24.956330    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:24.967020    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:24.967037    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:24.967042    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:24.984298    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:24.984308    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:24.995646    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:24.995657    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:25.018968    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:25.018976    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:25.036439    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:25.036449    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:25.071830    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:25.071841    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:25.076135    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:25.076141    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:25.090649    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:25.090662    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:25.104610    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:25.104621    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:25.117292    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:25.117303    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:25.154018    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:25.154031    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:25.165686    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:25.165700    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:25.178029    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:25.178041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:25.189634    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:25.189643    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:25.201521    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:25.201533    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:27.718052    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:32.719744    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:32.719865    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:32.736516    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:32.736594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:32.753508    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:32.753585    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:32.764586    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:32.764655    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:32.774859    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:32.774925    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:32.785886    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:32.785957    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:32.796144    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:32.796218    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:32.806423    8395 logs.go:276] 0 containers: []
	W0617 04:47:32.806445    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:32.806499    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:32.816847    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:32.816865    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:32.816872    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:32.852438    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:32.852451    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:32.869411    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:32.869424    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:32.906681    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:32.906688    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:32.911085    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:32.911091    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:32.926497    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:32.926507    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:32.938602    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:32.938614    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:32.953245    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:32.953255    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:32.965375    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:32.965386    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:32.977287    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:32.977297    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:32.992901    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:32.992911    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:33.017650    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:33.017659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:33.030057    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:33.030068    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:33.042381    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:33.042391    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:33.053774    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:33.053785    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:35.570523    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:40.572827    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:40.572939    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:40.585579    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:40.585654    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:40.597122    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:40.597194    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:40.608650    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:40.608719    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:40.622352    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:40.622423    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:40.633799    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:40.633875    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:40.644769    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:40.644837    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:40.655798    8395 logs.go:276] 0 containers: []
	W0617 04:47:40.655812    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:40.655877    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:40.666710    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:40.666731    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:40.666738    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:40.708443    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:40.708463    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:40.713377    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:40.713389    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:40.737946    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:40.737957    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:40.750477    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:40.750489    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:40.776034    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:40.776049    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:40.798508    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:40.798525    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:40.815663    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:40.815687    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:40.828361    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:40.828375    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:40.865778    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:40.865791    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:40.884154    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:40.884167    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:40.896385    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:40.896395    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:40.908461    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:40.908473    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:40.930355    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:40.930368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:40.946954    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:40.946967    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:43.463789    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:48.466012    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:48.466192    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:48.478392    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:48.478465    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:48.489118    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:48.489188    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:48.500005    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:48.500085    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:48.510346    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:48.510427    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:48.520714    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:48.520773    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:48.531127    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:48.531190    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:48.541655    8395 logs.go:276] 0 containers: []
	W0617 04:47:48.541666    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:48.541722    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:48.556447    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:48.556465    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:48.556471    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:48.568130    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:48.568140    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:48.583189    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:48.583200    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:48.600931    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:48.600943    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:48.624583    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:48.624594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:48.638394    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:48.638403    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:48.653677    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:48.653688    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:48.665587    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:48.665600    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:48.682090    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:48.682103    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:48.687263    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:48.687270    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:48.700669    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:48.700680    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:48.712328    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:48.712339    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:48.750663    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:48.750682    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:48.765928    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:48.765940    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:48.777636    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:48.777646    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:51.314844    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:56.315978    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:56.316149    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:56.328840    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:56.328912    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:56.339230    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:56.339304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:56.350004    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:56.350073    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:56.360773    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:56.360844    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:56.370843    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:56.370910    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:56.381885    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:56.381963    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:56.392827    8395 logs.go:276] 0 containers: []
	W0617 04:47:56.392838    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:56.392897    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:56.403104    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:56.403121    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:56.403127    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:56.407635    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:56.407641    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:56.419357    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:56.419368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:56.431203    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:56.431214    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:56.442747    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:56.442757    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:56.455695    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:56.455706    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:56.467902    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:56.467918    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:56.480253    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:56.480267    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:56.497933    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:56.497945    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:56.522720    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:56.522735    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:56.542630    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:56.542644    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:56.557643    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:56.557654    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:56.596260    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:56.596272    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:56.633299    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:56.633313    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:56.648642    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:56.648656    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:59.162036    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:04.164279    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:04.164396    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:04.176168    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:04.176244    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:04.187400    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:04.187472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:04.197892    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:04.197963    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:04.208468    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:04.208536    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:04.219225    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:04.219293    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:04.229829    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:04.229895    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:04.240234    8395 logs.go:276] 0 containers: []
	W0617 04:48:04.240249    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:04.240310    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:04.254181    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:04.254201    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:04.254206    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:04.265983    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:04.265996    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:04.283409    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:04.283419    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:04.294978    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:04.294987    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:04.299493    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:04.299502    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:04.313916    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:04.313928    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:04.325685    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:04.325694    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:04.340252    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:04.340265    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:04.351639    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:04.351649    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:04.366317    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:04.366330    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:04.405235    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:04.405250    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:04.417477    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:04.417487    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:04.455722    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:04.455734    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:04.467796    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:04.467807    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:04.490917    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:04.490929    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:07.004935    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:12.007209    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:12.007461    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:12.027966    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:12.028063    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:12.042348    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:12.042418    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:12.054031    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:12.054105    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:12.064903    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:12.064964    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:12.076113    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:12.076185    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:12.087450    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:12.087521    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:12.098021    8395 logs.go:276] 0 containers: []
	W0617 04:48:12.098034    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:12.098088    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:12.109052    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:12.109067    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:12.109072    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:12.120525    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:12.120541    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:12.145703    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:12.145716    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:12.179629    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:12.179643    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:12.196737    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:12.196747    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:12.211485    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:12.211496    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:12.223587    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:12.223597    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:12.235351    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:12.235362    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:12.253116    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:12.253136    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:12.264972    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:12.264985    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:12.269332    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:12.269342    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:12.280803    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:12.280820    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:12.317907    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:12.317916    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:12.332396    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:12.332408    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:12.344414    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:12.344426    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:14.858390    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:19.860643    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:19.860781    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:19.874452    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:19.874532    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:19.885087    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:19.885165    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:19.896397    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:19.896472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:19.906816    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:19.906884    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:19.916973    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:19.917037    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:19.927589    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:19.927646    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:19.938374    8395 logs.go:276] 0 containers: []
	W0617 04:48:19.938392    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:19.938457    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:19.949321    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:19.949338    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:19.949344    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:19.963663    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:19.963676    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:19.975318    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:19.975329    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:19.989957    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:19.989970    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:20.001427    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:20.001439    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:20.018570    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:20.018582    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:20.056931    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:20.056942    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:20.061620    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:20.061626    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:20.097981    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:20.097994    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:20.110293    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:20.110307    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:20.134472    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:20.134485    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:20.151587    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:20.151601    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:20.162897    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:20.162908    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:20.174291    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:20.174306    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:20.190366    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:20.190379    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:22.703218    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:27.705571    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:27.710021    8395 out.go:177] 
	W0617 04:48:27.714009    8395 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0617 04:48:27.714019    8395 out.go:239] * 
	W0617 04:48:27.714738    8395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:48:27.725830    8395 out.go:177] 

** /stderr **
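Each retry in the stderr above follows the same gather pattern: list the container IDs for a component with docker ps, then tail the last 400 lines of each matching container. A minimal standalone sketch of that pattern in Go (a hypothetical helper, not minikube's actual logs.go; assumes docker is reachable on PATH inside the guest and uses the k8s_etcd filter from the log as the example):

// gatherlogs.go: sketch of the "docker ps, then docker logs --tail 400" cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the log: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_etcd", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("=== docker logs --tail 400 %s ===\n", id)
		// CombinedOutput captures stdout and stderr, since containers may log to either.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Print(string(logs))
	}
}

Repeating this over each k8s_* name filter approximates one full gather cycle from the log.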
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-857000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-06-17 04:48:27.821167 -0700 PDT m=+1333.966176209
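The failing step itself is the healthz loop: every "Checking apiserver healthz at https://10.0.2.15:8443/healthz" entry is followed roughly five seconds later by "context deadline exceeded", until the 6m0s node-start budget is exhausted. A minimal sketch of one equivalent probe (assumptions: guest address and port taken from the log, TLS verification skipped since the host does not trust the apiserver's certificate; this is not minikube's api_server.go):

// healthz.go: sketch of a single 5-second probe matching the cadence in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// Assumption: skip verification because the apiserver cert is untrusted here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. context deadline exceeded
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // a healthy apiserver returns 200 OK
}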
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-857000 -n running-upgrade-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-857000 -n running-upgrade-857000: exit status 2 (15.710248417s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-857000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-192000          | force-systemd-flag-192000 | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-389000              | force-systemd-env-389000  | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-389000           | force-systemd-env-389000  | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT | 17 Jun 24 04:38 PDT |
	| start   | -p docker-flags-458000                | docker-flags-458000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-192000             | force-systemd-flag-192000 | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-192000          | force-systemd-flag-192000 | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT | 17 Jun 24 04:38 PDT |
	| start   | -p cert-expiration-317000             | cert-expiration-317000    | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-458000 ssh               | docker-flags-458000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-458000 ssh               | docker-flags-458000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-458000                | docker-flags-458000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT | 17 Jun 24 04:38 PDT |
	| start   | -p cert-options-907000                | cert-options-907000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-907000 ssh               | cert-options-907000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-907000 -- sudo        | cert-options-907000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-907000                | cert-options-907000       | jenkins | v1.33.1 | 17 Jun 24 04:38 PDT | 17 Jun 24 04:38 PDT |
	| start   | -p running-upgrade-857000             | minikube                  | jenkins | v1.26.0 | 17 Jun 24 04:38 PDT | 17 Jun 24 04:40 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-857000             | running-upgrade-857000    | jenkins | v1.33.1 | 17 Jun 24 04:40 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-317000             | cert-expiration-317000    | jenkins | v1.33.1 | 17 Jun 24 04:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-317000             | cert-expiration-317000    | jenkins | v1.33.1 | 17 Jun 24 04:41 PDT | 17 Jun 24 04:41 PDT |
	| start   | -p kubernetes-upgrade-972000          | kubernetes-upgrade-972000 | jenkins | v1.33.1 | 17 Jun 24 04:41 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-972000          | kubernetes-upgrade-972000 | jenkins | v1.33.1 | 17 Jun 24 04:42 PDT | 17 Jun 24 04:42 PDT |
	| start   | -p kubernetes-upgrade-972000          | kubernetes-upgrade-972000 | jenkins | v1.33.1 | 17 Jun 24 04:42 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-972000          | kubernetes-upgrade-972000 | jenkins | v1.33.1 | 17 Jun 24 04:42 PDT | 17 Jun 24 04:42 PDT |
	| start   | -p stopped-upgrade-767000             | minikube                  | jenkins | v1.26.0 | 17 Jun 24 04:42 PDT | 17 Jun 24 04:42 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-767000 stop           | minikube                  | jenkins | v1.26.0 | 17 Jun 24 04:42 PDT | 17 Jun 24 04:43 PDT |
	| start   | -p stopped-upgrade-767000             | stopped-upgrade-767000    | jenkins | v1.33.1 | 17 Jun 24 04:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 04:43:04
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 04:43:04.465153    8538 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:43:04.465315    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:43:04.465319    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:43:04.465322    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:43:04.465506    8538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:43:04.466703    8538 out.go:298] Setting JSON to false
	I0617 04:43:04.486062    8538 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4354,"bootTime":1718620230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:43:04.486174    8538 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:43:04.491801    8538 out.go:177] * [stopped-upgrade-767000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:43:04.498766    8538 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:43:04.501785    8538 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:43:04.498838    8538 notify.go:220] Checking for updates...
	I0617 04:43:04.509733    8538 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:43:04.512802    8538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:43:04.514196    8538 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:43:04.517693    8538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:43:04.521099    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:43:04.524762    8538 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 04:43:04.527795    8538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:43:04.530766    8538 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:43:04.537713    8538 start.go:297] selected driver: qemu2
	I0617 04:43:04.537718    8538 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:04.537772    8538 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:43:04.540403    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:43:04.540419    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:43:04.540450    8538 start.go:340] cluster config:
	{Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:04.540503    8538 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:43:04.548735    8538 out.go:177] * Starting "stopped-upgrade-767000" primary control-plane node in "stopped-upgrade-767000" cluster
	I0617 04:43:04.552739    8538 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:43:04.552752    8538 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0617 04:43:04.552756    8538 cache.go:56] Caching tarball of preloaded images
	I0617 04:43:04.552802    8538 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:43:04.552807    8538 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0617 04:43:04.552861    8538 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/config.json ...
	I0617 04:43:04.553317    8538 start.go:360] acquireMachinesLock for stopped-upgrade-767000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:43:04.553345    8538 start.go:364] duration metric: took 21.667µs to acquireMachinesLock for "stopped-upgrade-767000"
	I0617 04:43:04.553352    8538 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:43:04.553357    8538 fix.go:54] fixHost starting: 
	I0617 04:43:04.553456    8538 fix.go:112] recreateIfNeeded on stopped-upgrade-767000: state=Stopped err=<nil>
	W0617 04:43:04.553466    8538 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:43:04.556812    8538 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-767000" ...
	I0617 04:43:07.188757    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:07.189200    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:07.229035    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:07.229173    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:07.251771    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:07.251882    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:07.271936    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:07.272014    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:07.283296    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:07.283365    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:07.293486    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:07.293551    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:07.304063    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:07.304144    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:07.313567    8395 logs.go:276] 0 containers: []
	W0617 04:43:07.313579    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:07.313640    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:07.324084    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
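	Each probe cycle above first enumerates the containers for one control-plane component at a time by name prefix, then tails each container's logs. A hedged sketch of that enumeration; it shells out to a local docker CLI for illustration, whereas the trace runs the same command inside the guest through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter=name=... --format={{.ID}}` calls
// in the trace. Running docker locally is an assumption for illustration.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}

	Two IDs per component, as in the trace, is what you see after a restart: the exited pre-restart container plus its replacement.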
	I0617 04:43:07.324103    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:07.324109    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:07.338045    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:07.338058    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:07.362561    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:07.362574    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:07.367124    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:07.367133    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:07.380897    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:07.380909    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:07.394821    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:07.394840    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:07.406319    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:07.406333    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:07.424690    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:07.424705    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:07.449347    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:07.449357    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:07.486642    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:07.486652    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:07.498171    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:07.498183    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:07.509581    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:07.509594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:07.524511    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:07.524521    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:07.536646    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:07.536659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:07.572433    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:07.572445    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:07.591768    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:07.591781    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:07.603315    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:07.603324    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:04.564847    8538 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51472-:22,hostfwd=tcp::51473-:2376,hostname=stopped-upgrade-767000 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/disk.qcow2
	I0617 04:43:04.612893    8538 main.go:141] libmachine: STDOUT: 
	I0617 04:43:04.612915    8538 main.go:141] libmachine: STDERR: 
	I0617 04:43:04.612922    8538 main.go:141] libmachine: Waiting for VM to start (ssh -p 51472 docker@127.0.0.1)...
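	The hostfwd arguments in the qemu-system-aarch64 invocation above publish guest ports on the host (51472 -> 22 for SSH, 51473 -> 2376 for the Docker API), so "Waiting for VM to start" reduces to polling the forwarded SSH port until something accepts a connection. A minimal sketch of such a wait loop, assuming the same localhost port; the retry cadence and timeout are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls a forwarded host port until the guest's sshd accepts
// a TCP connection or the deadline passes.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForPort("127.0.0.1:51472", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port is up")
}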
	I0617 04:43:10.120713    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:15.121929    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:15.122105    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:15.134812    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:15.134902    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:15.148966    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:15.149039    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:15.159524    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:15.159594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:15.170436    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:15.170512    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:15.181465    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:15.181546    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:15.193824    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:15.193901    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:15.206129    8395 logs.go:276] 0 containers: []
	W0617 04:43:15.206143    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:15.206207    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:15.219254    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:15.219272    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:15.219278    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:15.224541    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:15.224554    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:15.269416    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:15.269434    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:15.286388    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:15.286403    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:15.311534    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:15.311556    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:15.343680    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:15.343695    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:15.372723    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:15.372744    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:15.413833    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:15.413854    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:15.443537    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:15.443558    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:15.464152    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:15.464171    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:15.489980    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:15.489992    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:15.505337    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:15.505353    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:15.524762    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:15.524776    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:15.537106    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:15.537120    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:15.552170    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:15.552182    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:15.563948    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:15.563960    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:15.575777    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:15.575791    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:18.094503    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:23.097165    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
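	The repeating pattern in pid 8395's trace is a health probe with a hard client timeout: each "Checking apiserver healthz" is an HTTPS GET against https://10.0.2.15:8443/healthz, and roughly five seconds later it dies with "context deadline exceeded", triggering another round of log gathering. A hedged Go sketch of one such probe; the five-second timeout is inferred from the timestamp gaps above, and skipping TLS verification stands in for minikube's real CA handling:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint with
// a hard client timeout, roughly what the api_server.go check amounts to.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded, as in the trace
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if string(body) != "ok" {
		return fmt.Errorf("unhealthy: %s", body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}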
	I0617 04:43:23.097593    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:23.137291    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:23.137427    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:23.158543    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:23.158643    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:23.172848    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:23.172938    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:23.184974    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:23.185049    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:23.200704    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:23.200776    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:23.210984    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:23.211048    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:23.221073    8395 logs.go:276] 0 containers: []
	W0617 04:43:23.221085    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:23.221144    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:23.231485    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:23.231502    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:23.231507    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:23.247260    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:23.247272    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:23.258828    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:23.258839    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:23.263233    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:23.263240    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:23.298937    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:23.298952    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:23.314035    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:23.314047    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:23.328899    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:23.328910    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:23.340633    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:23.340643    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:23.357649    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:23.357659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:23.381612    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:23.381621    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:25.288288    8538 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/config.json ...
	I0617 04:43:25.288994    8538 machine.go:94] provisionDockerMachine start ...
	I0617 04:43:25.289192    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.289690    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.289704    8538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 04:43:25.381276    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
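	From here on, provisioning is driven through a native Go SSH client aimed at the forwarded port; the `hostname` command above is its first round trip (the guest still reports "minikube" before being renamed). A minimal sketch of that round trip using golang.org/x/crypto/ssh, with the key path and address taken from the sshutil lines later in this trace; ignoring the host key is an assumption for brevity:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and address as reported by the trace's sshutil lines.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:51472", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no host-key pinning
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}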
	
	I0617 04:43:25.381307    8538 buildroot.go:166] provisioning hostname "stopped-upgrade-767000"
	I0617 04:43:25.381432    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.381672    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.381689    8538 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-767000 && echo "stopped-upgrade-767000" | sudo tee /etc/hostname
	I0617 04:43:25.459085    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-767000
	
	I0617 04:43:25.459146    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.459271    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.459280    8538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-767000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-767000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-767000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 04:43:25.525208    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 04:43:25.525221    8538 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19087-6045/.minikube CaCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19087-6045/.minikube}
	I0617 04:43:25.525229    8538 buildroot.go:174] setting up certificates
	I0617 04:43:25.525234    8538 provision.go:84] configureAuth start
	I0617 04:43:25.525242    8538 provision.go:143] copyHostCerts
	I0617 04:43:25.525322    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem, removing ...
	I0617 04:43:25.525329    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem
	I0617 04:43:25.525434    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem (1078 bytes)
	I0617 04:43:25.525627    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem, removing ...
	I0617 04:43:25.525631    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem
	I0617 04:43:25.525680    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem (1123 bytes)
	I0617 04:43:25.525817    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem, removing ...
	I0617 04:43:25.525820    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem
	I0617 04:43:25.525900    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem (1679 bytes)
	I0617 04:43:25.526003    8538 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-767000 san=[127.0.0.1 localhost minikube stopped-upgrade-767000]
	I0617 04:43:25.556971    8538 provision.go:177] copyRemoteCerts
	I0617 04:43:25.557004    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 04:43:25.557010    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:25.592646    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 04:43:25.599455    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0617 04:43:25.605918    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 04:43:25.612976    8538 provision.go:87] duration metric: took 87.729ms to configureAuth
	I0617 04:43:25.612992    8538 buildroot.go:189] setting minikube options for container-runtime
	I0617 04:43:25.613093    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:43:25.613136    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.613229    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.613234    8538 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0617 04:43:25.678679    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0617 04:43:25.678689    8538 buildroot.go:70] root file system type: tmpfs
	I0617 04:43:25.678749    8538 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0617 04:43:25.678801    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.678923    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.678958    8538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0617 04:43:25.747016    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0617 04:43:25.747065    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.747189    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.747200    8538 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0617 04:43:26.085912    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0617 04:43:26.085924    8538 machine.go:97] duration metric: took 796.928125ms to provisionDockerMachine
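	The unit install above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live file, and only on a difference (or, as here, when the live file does not exist yet) moved into place and followed by daemon-reload, enable, and restart. A local sketch of the same compare-then-swap shape; the paths are hypothetical and the systemctl steps are reduced to a print:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mimics the diff-or-move idiom from the trace: leave the
// live unit alone when identical, otherwise swap in the candidate and report
// that systemd needs a daemon-reload and restart.
func installIfChanged(live, next string) (changed bool, err error) {
	newBytes, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	oldBytes, err := os.ReadFile(live)
	if err == nil && bytes.Equal(oldBytes, newBytes) {
		return false, os.Remove(next)
	}
	// Live file missing or different: move the candidate into place.
	return true, os.Rename(next, live)
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", "/tmp/docker.service.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		fmt.Println("would run: systemctl daemon-reload && systemctl enable --now docker")
	}
}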
	I0617 04:43:26.085930    8538 start.go:293] postStartSetup for "stopped-upgrade-767000" (driver="qemu2")
	I0617 04:43:26.085938    8538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 04:43:26.085988    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 04:43:26.085997    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:26.120743    8538 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 04:43:26.122130    8538 info.go:137] Remote host: Buildroot 2021.02.12
	I0617 04:43:26.122137    8538 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/addons for local assets ...
	I0617 04:43:26.122211    8538 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/files for local assets ...
	I0617 04:43:26.122329    8538 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem -> 65402.pem in /etc/ssl/certs
	I0617 04:43:26.122459    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 04:43:26.125579    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:43:26.132415    8538 start.go:296] duration metric: took 46.479959ms for postStartSetup
	I0617 04:43:26.132429    8538 fix.go:56] duration metric: took 21.579294s for fixHost
	I0617 04:43:26.132462    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:26.132570    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:26.132574    8538 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0617 04:43:26.198405    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624606.504962171
	
	I0617 04:43:26.198413    8538 fix.go:216] guest clock: 1718624606.504962171
	I0617 04:43:26.198418    8538 fix.go:229] Guest: 2024-06-17 04:43:26.504962171 -0700 PDT Remote: 2024-06-17 04:43:26.132432 -0700 PDT m=+21.696242043 (delta=372.530171ms)
	I0617 04:43:26.198429    8538 fix.go:200] guest clock delta is within tolerance: 372.530171ms
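	fix.go reads the guest clock with `date +%s.%N` and compares it to the host clock, accepting the ~372ms delta above as within tolerance. A small sketch of that comparison using the exact values from the trace; the one-second tolerance is an assumption, since the real threshold is not logged here:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute offset from the host clock.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(host.Sub(guest)))), nil
}

func main() {
	// Values from the trace: guest clock vs. the host timestamp of the check.
	host := time.Date(2024, 6, 17, 11, 43, 26, 132432000, time.UTC) // 04:43:26.132432 PDT
	delta, err := clockDelta("1718624606.504962171", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumption; the real threshold isn't logged
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta < tolerance)
}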
	I0617 04:43:26.198432    8538 start.go:83] releasing machines lock for "stopped-upgrade-767000", held for 21.645306583s
	I0617 04:43:26.198494    8538 ssh_runner.go:195] Run: cat /version.json
	I0617 04:43:26.198504    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:26.198494    8538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 04:43:26.198544    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	W0617 04:43:26.199156    8538 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51472: connect: connection refused
	I0617 04:43:26.199180    8538 retry.go:31] will retry after 303.14945ms: dial tcp [::1]:51472: connect: connection refused
	W0617 04:43:26.231359    8538 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0617 04:43:26.231407    8538 ssh_runner.go:195] Run: systemctl --version
	I0617 04:43:26.233238    8538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 04:43:26.235034    8538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 04:43:26.235061    8538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0617 04:43:26.237959    8538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0617 04:43:26.242648    8538 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 04:43:26.242656    8538 start.go:494] detecting cgroup driver to use...
	I0617 04:43:26.242724    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:43:26.249454    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0617 04:43:26.252903    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 04:43:26.255737    8538 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 04:43:26.255763    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 04:43:26.258479    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:43:26.261674    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 04:43:26.264862    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:43:26.267701    8538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 04:43:26.270404    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 04:43:26.273674    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0617 04:43:26.277035    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0617 04:43:26.280094    8538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 04:43:26.282709    8538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 04:43:26.285887    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:26.355232    8538 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0617 04:43:26.361050    8538 start.go:494] detecting cgroup driver to use...
	I0617 04:43:26.361129    8538 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0617 04:43:26.366592    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:43:26.375454    8538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 04:43:26.381466    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:43:26.386085    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 04:43:26.391042    8538 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0617 04:43:26.447253    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 04:43:26.452145    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:43:26.457362    8538 ssh_runner.go:195] Run: which cri-dockerd
	I0617 04:43:26.458575    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0617 04:43:26.461276    8538 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0617 04:43:26.466026    8538 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0617 04:43:26.527411    8538 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0617 04:43:26.590926    8538 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0617 04:43:26.590982    8538 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
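	The 130-byte daemon.json pushed here pins Docker's cgroup driver to cgroupfs so it agrees with the kubelet. The trace shows only the file's size, so the payload below is a hypothetical reconstruction built on Docker's documented exec-opts key, rendered by a short Go program:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reconstruction of the daemon.json minikube scp's over;
	// only the cgroupfs choice is grounded in the surrounding log lines.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}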
	I0617 04:43:26.598044    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:26.658406    8538 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:43:27.799286    8538 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.140870625s)
	I0617 04:43:27.799351    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0617 04:43:27.804065    8538 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0617 04:43:27.809052    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:43:27.813477    8538 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0617 04:43:27.875557    8538 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0617 04:43:27.935659    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:27.994159    8538 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0617 04:43:28.000396    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:43:28.004739    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:28.066374    8538 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0617 04:43:28.104621    8538 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0617 04:43:28.104703    8538 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0617 04:43:28.107839    8538 start.go:562] Will wait 60s for crictl version
	I0617 04:43:28.107901    8538 ssh_runner.go:195] Run: which crictl
	I0617 04:43:28.109206    8538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 04:43:28.124384    8538 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0617 04:43:28.124451    8538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:43:28.149771    8538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:43:23.418995    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:23.419005    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:23.449880    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:23.449890    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:23.461569    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:23.461582    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:23.475443    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:23.475454    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:23.490628    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:23.490640    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:23.505551    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:23.505561    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:23.517870    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:23.517879    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:26.031572    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:28.171020    8538 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0617 04:43:28.171140    8538 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0617 04:43:28.172345    8538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 04:43:28.176055    8538 kubeadm.go:877] updating cluster {Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0617 04:43:28.176109    8538 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:43:28.176150    8538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:43:28.191908    8538 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:43:28.191917    8538 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:43:28.191964    8538 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:43:28.194794    8538 ssh_runner.go:195] Run: which lz4
	I0617 04:43:28.196121    8538 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0617 04:43:28.197332    8538 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 04:43:28.197343    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0617 04:43:28.913124    8538 docker.go:649] duration metric: took 717.046875ms to copy over tarball
	I0617 04:43:28.913187    8538 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
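	Because the guest still carries the old k8s.gcr.io image set rather than the expected registry.k8s.io one, the cached tarball is scp'd over and unpacked straight into /var with lz4 decompression, repopulating /var/lib/docker. A local sketch of the extraction step via os/exec; the path and tar flags are the ones from the trace, and running it requires tar and lz4 on the machine:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // destination path used in the trace

	// Existence check first, mirroring the stat call in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing, would scp it over first:", err)
		return
	}

	// Same flags as the trace: preserve xattrs, decompress with lz4,
	// extract into /var so image layers land under /var/lib/docker.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}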
	I0617 04:43:31.033748    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:31.033903    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:31.049371    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:31.049457    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:31.066629    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:31.066705    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:31.077231    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:31.077301    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:31.087678    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:31.087750    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:31.102627    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:31.102699    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:31.113782    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:31.113852    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:31.124028    8395 logs.go:276] 0 containers: []
	W0617 04:43:31.124040    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:31.124095    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:31.134963    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:31.134980    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:31.134985    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:31.148275    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:31.148290    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:31.159792    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:31.159802    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:31.175024    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:31.175038    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:31.187912    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:31.187924    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:31.207888    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:31.207902    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:31.226658    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:31.226673    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:31.268207    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:31.268220    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:31.273000    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:31.273009    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:31.310234    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:31.310245    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:31.322178    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:31.322190    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:31.346500    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:31.346510    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:31.361079    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:31.361090    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:31.387282    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:31.387293    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:31.401740    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:31.401751    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:31.413594    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:31.413608    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:31.432855    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:31.432866    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:30.075857    8538 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162667209s)
	I0617 04:43:30.075872    8538 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 04:43:30.091592    8538 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:43:30.094513    8538 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0617 04:43:30.099514    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:30.158323    8538 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:43:31.815446    8538 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657124166s)
	I0617 04:43:31.815536    8538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:43:31.826586    8538 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:43:31.826596    8538 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:43:31.826601    8538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0617 04:43:31.833249    8538 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:31.833268    8538 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:31.833283    8538 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:31.833329    8538 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:31.833349    8538 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:31.833381    8538 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0617 04:43:31.833473    8538 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:31.833512    8538 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:31.841272    8538 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:31.841337    8538 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0617 04:43:31.841391    8538 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:31.841484    8538 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:31.841666    8538 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:31.841561    8538 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:31.841656    8538 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:31.841873    8538 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.734405    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.750426    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.767766    8538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0617 04:43:32.767810    8538 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.767903    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.777615    8538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0617 04:43:32.777645    8538 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.777718    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.782218    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0617 04:43:32.785510    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0617 04:43:32.795112    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.798928    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0617 04:43:32.808161    8538 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0617 04:43:32.808184    8538 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0617 04:43:32.808246    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0617 04:43:32.818192    8538 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0617 04:43:32.818212    8538 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.818266    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.824026    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0617 04:43:32.824160    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0617 04:43:32.831016    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0617 04:43:32.831030    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0617 04:43:32.831045    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0617 04:43:32.837313    8538 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0617 04:43:32.837426    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.838839    8538 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0617 04:43:32.838847    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0617 04:43:32.852991    8538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0617 04:43:32.853013    8538 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.853068    8538 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.875940    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0617 04:43:32.875994    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 04:43:32.876091    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:43:32.877433    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0617 04:43:32.877448    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0617 04:43:32.878630    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0617 04:43:32.886574    8538 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0617 04:43:32.886684    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.894687    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.907628    8538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0617 04:43:32.907653    8538 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:32.907701    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:32.910272    8538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0617 04:43:32.910287    8538 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.910322    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.914201    8538 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:43:32.914213    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0617 04:43:32.928050    8538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0617 04:43:32.928071    8538 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.928125    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.934429    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0617 04:43:32.934466    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0617 04:43:32.934576    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:43:33.170209    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 04:43:33.170253    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0617 04:43:33.170266    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0617 04:43:33.170284    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0617 04:43:33.205607    8538 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:43:33.205621    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0617 04:43:33.249774    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0617 04:43:33.249810    8538 cache_images.go:92] duration metric: took 1.4231945s to LoadCachedImages
	W0617 04:43:33.249854    8538 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0617 04:43:33.249859    8538 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0617 04:43:33.249911    8538 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-767000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 04:43:33.249975    8538 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0617 04:43:33.263731    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:43:33.263742    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:43:33.263747    8538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 04:43:33.263755    8538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-767000 NodeName:stopped-upgrade-767000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 04:43:33.263819    8538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-767000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 04:43:33.263876    8538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0617 04:43:33.266672    8538 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 04:43:33.266696    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 04:43:33.269699    8538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0617 04:43:33.274758    8538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 04:43:33.279549    8538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0617 04:43:33.284735    8538 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0617 04:43:33.285948    8538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 04:43:33.289842    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:33.354694    8538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:43:33.360309    8538 certs.go:68] Setting up /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000 for IP: 10.0.2.15
	I0617 04:43:33.360316    8538 certs.go:194] generating shared ca certs ...
	I0617 04:43:33.360325    8538 certs.go:226] acquiring lock for ca certs: {Name:mk71e2ea16ce0c468e7dfee6f005765117fbc8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.360494    8538 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key
	I0617 04:43:33.360543    8538 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key
	I0617 04:43:33.360549    8538 certs.go:256] generating profile certs ...
	I0617 04:43:33.360620    8538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key
	I0617 04:43:33.360636    8538 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98
	I0617 04:43:33.360647    8538 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0617 04:43:33.486940    8538 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 ...
	I0617 04:43:33.486957    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98: {Name:mk7db01f0a717421f7581ec76fcbdd8064ed6750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.487390    8538 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98 ...
	I0617 04:43:33.487409    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98: {Name:mkffc6d40f94ec0c1441a6a597a6004138fbbc94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.487563    8538 certs.go:381] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt
	I0617 04:43:33.487690    8538 certs.go:385] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key
	I0617 04:43:33.487847    8538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.key
	I0617 04:43:33.487996    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem (1338 bytes)
	W0617 04:43:33.488024    8538 certs.go:480] ignoring /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540_empty.pem, impossibly tiny 0 bytes
	I0617 04:43:33.488029    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 04:43:33.488047    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem (1078 bytes)
	I0617 04:43:33.488078    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem (1123 bytes)
	I0617 04:43:33.488095    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem (1679 bytes)
	I0617 04:43:33.488132    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:43:33.488899    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 04:43:33.496772    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 04:43:33.503849    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 04:43:33.510351    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 04:43:33.517811    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 04:43:33.524478    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 04:43:33.531264    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 04:43:33.537951    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 04:43:33.545191    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 04:43:33.551587    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem --> /usr/share/ca-certificates/6540.pem (1338 bytes)
	I0617 04:43:33.557991    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /usr/share/ca-certificates/65402.pem (1708 bytes)
	I0617 04:43:33.564882    8538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 04:43:33.569874    8538 ssh_runner.go:195] Run: openssl version
	I0617 04:43:33.571831    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65402.pem && ln -fs /usr/share/ca-certificates/65402.pem /etc/ssl/certs/65402.pem"
	I0617 04:43:33.574640    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.575960    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 11:27 /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.575979    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.577604    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65402.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 04:43:33.580821    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 04:43:33.583699    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.584943    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.584962    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.586754    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 04:43:33.589926    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6540.pem && ln -fs /usr/share/ca-certificates/6540.pem /etc/ssl/certs/6540.pem"
	I0617 04:43:33.593412    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.594794    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 11:27 /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.594813    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.596529    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6540.pem /etc/ssl/certs/51391683.0"
	I0617 04:43:33.599477    8538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 04:43:33.600836    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 04:43:33.603491    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 04:43:33.605326    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 04:43:33.607229    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 04:43:33.608833    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 04:43:33.610476    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 04:43:33.612351    8538 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:33.612416    8538 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:43:33.622800    8538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 04:43:33.625885    8538 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 04:43:33.625892    8538 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 04:43:33.625895    8538 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 04:43:33.625919    8538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 04:43:33.629231    8538 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:43:33.629541    8538 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-767000" does not appear in /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:43:33.629636    8538 kubeconfig.go:62] /Users/jenkins/minikube-integration/19087-6045/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-767000" cluster setting kubeconfig missing "stopped-upgrade-767000" context setting]
	I0617 04:43:33.629822    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.630241    8538 kapi.go:59] client config for stopped-upgrade-767000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a0460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:43:33.630563    8538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 04:43:33.633395    8538 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-767000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0617 04:43:33.633401    8538 kubeadm.go:1154] stopping kube-system containers ...
	I0617 04:43:33.633436    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:43:33.644174    8538 docker.go:483] Stopping containers: [28331efdc258 f5446e1c7e66 388707f1fcc0 293474b3258b c6ee7db29f8d 7a79dd7078e6 4817d393fb9b 853a9dce7b50]
	I0617 04:43:33.644231    8538 ssh_runner.go:195] Run: docker stop 28331efdc258 f5446e1c7e66 388707f1fcc0 293474b3258b c6ee7db29f8d 7a79dd7078e6 4817d393fb9b 853a9dce7b50
	I0617 04:43:33.655003    8538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 04:43:33.660677    8538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:43:33.663266    8538 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:43:33.663272    8538 kubeadm.go:156] found existing configuration files:
	
	I0617 04:43:33.663295    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf
	I0617 04:43:33.665885    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:43:33.665910    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:43:33.669051    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf
	I0617 04:43:33.671572    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:43:33.671592    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:43:33.674319    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf
	I0617 04:43:33.677312    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:43:33.677353    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:43:33.679961    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf
	I0617 04:43:33.682266    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:43:33.682287    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 04:43:33.684984    8538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:43:33.687511    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:33.708836    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.425622    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:33.954578    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:34.543052    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.563256    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.588179    8538 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:43:34.588267    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.090436    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.590326    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.594415    8538 api_server.go:72] duration metric: took 1.006249708s to wait for apiserver process to appear ...
	I0617 04:43:35.594425    8538 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:43:35.594433    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:38.956839    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:38.957257    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:38.997183    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:38.997325    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:39.019343    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:39.019464    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:39.034836    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:39.034909    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:39.047788    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:39.047864    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:39.058829    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:39.058891    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:39.070081    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:39.070141    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:39.080750    8395 logs.go:276] 0 containers: []
	W0617 04:43:39.080763    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:39.080820    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:39.091078    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:39.091094    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:39.091100    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:39.095581    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:39.095590    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:39.109615    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:39.109624    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:39.124185    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:39.124202    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:39.139295    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:39.139307    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:39.162581    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:39.162588    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:39.199637    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:39.199645    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:39.233704    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:39.233719    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:39.248982    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:39.248994    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:39.260305    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:39.260318    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:39.275128    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:39.275139    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:39.286848    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:39.286860    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:39.298821    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:39.298832    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:39.324771    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:39.324783    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:39.338618    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:39.338626    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:39.350274    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:39.350285    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:39.369922    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:39.369933    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:41.883317    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:40.596205    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:40.596257    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:46.885979    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:46.886187    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:46.912878    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:46.912992    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:46.931247    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:46.931336    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:46.944310    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:46.944383    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:46.955734    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:46.955799    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:46.966128    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:46.966194    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:46.976482    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:46.976543    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:46.992604    8395 logs.go:276] 0 containers: []
	W0617 04:43:46.992615    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:46.992672    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:47.002835    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:47.002853    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:47.002859    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:47.017692    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:47.017704    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:47.031156    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:47.031168    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:47.044139    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:47.044149    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:47.079540    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:47.079550    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:47.091205    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:47.091216    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:47.106271    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:47.106284    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:43:47.129436    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:47.129446    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:47.154160    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:47.154169    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:47.175076    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:47.175090    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:47.196307    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:47.196317    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:47.207269    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:47.207280    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:47.224808    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:47.224821    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:47.239263    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:47.239273    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:47.276299    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:47.276308    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:47.280328    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:47.280333    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:47.291728    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:47.291738    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:45.596566    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:45.596620    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:49.805891    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:50.597381    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:50.597427    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:54.808161    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:54.808527    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:43:54.846974    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:43:54.847102    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:43:54.871889    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:43:54.871994    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:43:54.886232    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:43:54.886304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:43:54.898101    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:43:54.898170    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:43:54.908465    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:43:54.908539    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:43:54.922041    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:43:54.922113    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:43:54.932091    8395 logs.go:276] 0 containers: []
	W0617 04:43:54.932104    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:43:54.932151    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:43:54.942657    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:43:54.942674    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:43:54.942680    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:43:54.982884    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:43:54.982896    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:43:54.996234    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:43:54.996247    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:43:55.012284    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:43:55.012299    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:43:55.025066    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:43:55.025078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:43:55.040315    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:43:55.040327    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:43:55.071227    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:43:55.071241    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:43:55.087055    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:43:55.087067    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:43:55.103368    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:43:55.103381    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:43:55.119245    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:43:55.119256    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:43:55.131744    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:43:55.131757    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:43:55.136401    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:43:55.136409    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:43:55.156072    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:43:55.156090    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:43:55.194217    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:43:55.194239    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:43:55.208864    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:43:55.208877    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:43:55.220873    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:43:55.220883    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:43:55.231862    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:43:55.231875    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
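The interleaved `Checking apiserver healthz ... / stopped: ...` pairs above (from two concurrent minikube processes, pids 8395 and 8538) come from a simple poll: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retried until an overall deadline. A minimal Go sketch of that loop, assuming a 5-second client timeout to match the `Client.Timeout exceeded` errors in the log (illustrative, not minikube's actual api_server.go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the "Client.Timeout exceeded" errors above
		Transport: &http.Transport{
			// The apiserver cert is self-signed inside the VM; a real client
			// would trust minikube's CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(5 * time.Second) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // control plane is answering
		}
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

In this run the poll never succeeds, which is why each failure is followed by a fresh log-gathering pass.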
	I0617 04:43:57.756353    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:55.598024    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:55.598071    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:02.758968    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:02.759175    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:02.785027    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:44:02.785150    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:02.803528    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:44:02.803609    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:02.819927    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:44:02.819999    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:02.831375    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:44:02.831442    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:02.842072    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:44:02.842143    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:02.852694    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:44:02.852753    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:02.863198    8395 logs.go:276] 0 containers: []
	W0617 04:44:02.863208    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:02.863254    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:02.873427    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:44:02.873448    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:44:02.873454    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:44:02.898337    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:44:02.898347    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:44:02.912202    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:44:02.912214    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:44:02.929635    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:44:02.929649    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:44:02.943768    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:02.943779    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:02.981438    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:02.981447    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:02.985462    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:44:02.985470    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:44:02.997023    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:44:02.997034    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:44:03.012595    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:44:03.012608    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:44:03.030101    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:44:03.030112    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:44:03.044106    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:44:03.044118    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:44:03.062530    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:44:03.062543    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:44:03.077348    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:03.077357    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:03.110877    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:44:03.110894    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:44:03.122122    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:44:03.122132    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:44:03.134028    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:03.134042    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:03.155979    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:44:03.155985    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:00.598957    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:00.599042    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:05.675328    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:05.600956    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:05.601010    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:10.677766    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:10.677874    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:10.690609    8395 logs.go:276] 2 containers: [bc463d808817 0d3125fffc84]
	I0617 04:44:10.690684    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:10.701661    8395 logs.go:276] 2 containers: [cef9c5b669dd ac3f9b0c979d]
	I0617 04:44:10.701732    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:10.712267    8395 logs.go:276] 1 containers: [c2025c56f5e1]
	I0617 04:44:10.712336    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:10.724110    8395 logs.go:276] 2 containers: [2056fb6f1b37 7f9b0db25449]
	I0617 04:44:10.724185    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:10.734361    8395 logs.go:276] 1 containers: [da53a64a57f4]
	I0617 04:44:10.734426    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:10.745326    8395 logs.go:276] 2 containers: [f61f42ce415d 4b8a1fb876c6]
	I0617 04:44:10.745395    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:10.755703    8395 logs.go:276] 0 containers: []
	W0617 04:44:10.755719    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:10.755775    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:10.767721    8395 logs.go:276] 2 containers: [fc07ca4691e6 ef05c5d28ee9]
	I0617 04:44:10.767739    8395 logs.go:123] Gathering logs for coredns [c2025c56f5e1] ...
	I0617 04:44:10.767744    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2025c56f5e1"
	I0617 04:44:10.778689    8395 logs.go:123] Gathering logs for kube-controller-manager [4b8a1fb876c6] ...
	I0617 04:44:10.778702    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8a1fb876c6"
	I0617 04:44:10.793027    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:10.793038    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:10.833847    8395 logs.go:123] Gathering logs for etcd [ac3f9b0c979d] ...
	I0617 04:44:10.833858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3f9b0c979d"
	I0617 04:44:10.851293    8395 logs.go:123] Gathering logs for kube-scheduler [2056fb6f1b37] ...
	I0617 04:44:10.851301    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056fb6f1b37"
	I0617 04:44:10.862525    8395 logs.go:123] Gathering logs for storage-provisioner [ef05c5d28ee9] ...
	I0617 04:44:10.862537    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef05c5d28ee9"
	I0617 04:44:10.874122    8395 logs.go:123] Gathering logs for etcd [cef9c5b669dd] ...
	I0617 04:44:10.874138    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef9c5b669dd"
	I0617 04:44:10.887801    8395 logs.go:123] Gathering logs for kube-scheduler [7f9b0db25449] ...
	I0617 04:44:10.887810    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f9b0db25449"
	I0617 04:44:10.902748    8395 logs.go:123] Gathering logs for kube-proxy [da53a64a57f4] ...
	I0617 04:44:10.902765    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da53a64a57f4"
	I0617 04:44:10.914755    8395 logs.go:123] Gathering logs for kube-controller-manager [f61f42ce415d] ...
	I0617 04:44:10.914765    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f61f42ce415d"
	I0617 04:44:10.933030    8395 logs.go:123] Gathering logs for storage-provisioner [fc07ca4691e6] ...
	I0617 04:44:10.933041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc07ca4691e6"
	I0617 04:44:10.944902    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:10.944911    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:10.967147    8395 logs.go:123] Gathering logs for kube-apiserver [0d3125fffc84] ...
	I0617 04:44:10.967156    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3125fffc84"
	I0617 04:44:10.993160    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:10.993170    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:11.050281    8395 logs.go:123] Gathering logs for kube-apiserver [bc463d808817] ...
	I0617 04:44:11.050292    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc463d808817"
	I0617 04:44:11.063530    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:44:11.063540    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:11.076883    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:11.076893    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
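Each `Gathering logs for ...` block above follows the same recipe: list containers whose name matches `k8s_<component>` via `docker ps -a --filter=name=... --format={{.ID}}`, then tail the last 400 lines of each with `docker logs --tail 400 <id>`. A compact Go sketch of that recipe, run directly against a local Docker daemon rather than through minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The same component list the log enumerates on every pass.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs lists all containers (running or exited) named k8s_<name>.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
```

Two IDs per component (e.g. `[bc463d808817 0d3125fffc84]`) mean an exited container plus its restarted replacement, which is why both get tailed.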
	I0617 04:44:10.602647    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:10.602675    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:13.583518    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:15.604509    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:15.604546    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:18.584713    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:18.584757    8395 kubeadm.go:591] duration metric: took 4m4.28340175s to restartPrimaryControlPlane
	W0617 04:44:18.584791    8395 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 04:44:18.584807    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0617 04:44:19.589955    8395 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005148292s)
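At kubeadm.go:591 above, the restart path has exhausted its roughly four-minute budget, so minikube abandons it, resets the node, and re-runs `kubeadm init` from the generated config. A control-flow sketch of that fallback, with a stubbed-out restart step and illustrative helper names (not minikube's real API):

```go
package main

import (
	"errors"
	"log"
	"os/exec"
	"time"
)

// restartPrimaryControlPlane stands in for the step that timed out above
// after 4m4s; here it simply simulates that failure.
func restartPrimaryControlPlane(budget time.Duration) error {
	return errors.New("control plane did not become healthy within " + budget.String())
}

// run executes a command and logs it the way ssh_runner.go echoes each Run.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	log.Printf("Run: %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := restartPrimaryControlPlane(4 * time.Minute); err != nil {
		log.Println("! Unable to restart control-plane node(s), will reset cluster")
		// Mirrors the reset/init pair in the log; the real init also passes
		// a long --ignore-preflight-errors list.
		_ = run("sudo", "kubeadm", "reset", "--cri-socket", "/var/run/cri-dockerd.sock", "--force")
		_ = run("sudo", "kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	}
}
```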
	I0617 04:44:19.590023    8395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 04:44:19.594853    8395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:44:19.597471    8395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:44:19.600560    8395 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:44:19.600566    8395 kubeadm.go:156] found existing configuration files:
	
	I0617 04:44:19.600591    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf
	I0617 04:44:19.603687    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:44:19.603711    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:44:19.606376    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf
	I0617 04:44:19.608725    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:44:19.608750    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:44:19.611615    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf
	I0617 04:44:19.614324    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:44:19.614344    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:44:19.616811    8395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf
	I0617 04:44:19.619653    8395 kubeadm.go:162] "https://control-plane.minikube.internal:51289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51289 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:44:19.619674    8395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 04:44:19.622137    8395 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 04:44:19.638454    8395 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0617 04:44:19.638483    8395 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 04:44:19.685224    8395 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 04:44:19.685280    8395 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 04:44:19.685329    8395 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0617 04:44:19.739501    8395 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 04:44:19.746452    8395 out.go:204]   - Generating certificates and keys ...
	I0617 04:44:19.746484    8395 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 04:44:19.746515    8395 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 04:44:19.746550    8395 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 04:44:19.746582    8395 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 04:44:19.746622    8395 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 04:44:19.746656    8395 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 04:44:19.746692    8395 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 04:44:19.746727    8395 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 04:44:19.746764    8395 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 04:44:19.746804    8395 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 04:44:19.746821    8395 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 04:44:19.746854    8395 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 04:44:19.905499    8395 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 04:44:20.058851    8395 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 04:44:20.242351    8395 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 04:44:20.286744    8395 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 04:44:20.317756    8395 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 04:44:20.318143    8395 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 04:44:20.318200    8395 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 04:44:20.397246    8395 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 04:44:20.405353    8395 out.go:204]   - Booting up control plane ...
	I0617 04:44:20.405454    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 04:44:20.405498    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 04:44:20.405531    8395 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 04:44:20.405705    8395 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 04:44:20.405822    8395 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 04:44:20.606774    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:20.606803    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:25.411874    8395 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.010209 seconds
	I0617 04:44:25.411958    8395 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 04:44:25.417398    8395 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 04:44:25.925514    8395 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 04:44:25.925617    8395 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-857000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 04:44:26.429050    8395 kubeadm.go:309] [bootstrap-token] Using token: yu7u84.ui5p86jwwxs7u8th
	I0617 04:44:26.435714    8395 out.go:204]   - Configuring RBAC rules ...
	I0617 04:44:26.435780    8395 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 04:44:26.435830    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 04:44:26.441340    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 04:44:26.442231    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0617 04:44:26.443092    8395 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 04:44:26.443882    8395 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 04:44:26.447073    8395 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 04:44:26.627748    8395 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 04:44:26.832970    8395 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 04:44:26.833531    8395 kubeadm.go:309] 
	I0617 04:44:26.833561    8395 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 04:44:26.833565    8395 kubeadm.go:309] 
	I0617 04:44:26.833613    8395 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 04:44:26.833616    8395 kubeadm.go:309] 
	I0617 04:44:26.833640    8395 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 04:44:26.833671    8395 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 04:44:26.833705    8395 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 04:44:26.833708    8395 kubeadm.go:309] 
	I0617 04:44:26.833731    8395 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 04:44:26.833754    8395 kubeadm.go:309] 
	I0617 04:44:26.833776    8395 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 04:44:26.833778    8395 kubeadm.go:309] 
	I0617 04:44:26.833811    8395 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 04:44:26.833850    8395 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 04:44:26.833893    8395 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 04:44:26.833896    8395 kubeadm.go:309] 
	I0617 04:44:26.833933    8395 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 04:44:26.833972    8395 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 04:44:26.833976    8395 kubeadm.go:309] 
	I0617 04:44:26.834018    8395 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yu7u84.ui5p86jwwxs7u8th \
	I0617 04:44:26.834075    8395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb \
	I0617 04:44:26.834088    8395 kubeadm.go:309] 	--control-plane 
	I0617 04:44:26.834091    8395 kubeadm.go:309] 
	I0617 04:44:26.834138    8395 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 04:44:26.834145    8395 kubeadm.go:309] 
	I0617 04:44:26.834184    8395 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yu7u84.ui5p86jwwxs7u8th \
	I0617 04:44:26.834250    8395 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb 
	I0617 04:44:26.834310    8395 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 04:44:26.834319    8395 cni.go:84] Creating CNI manager for ""
	I0617 04:44:26.834327    8395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:44:26.837393    8395 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 04:44:26.843370    8395 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 04:44:26.846238    8395 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
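`scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` above is minikube writing its bridge CNI config from an in-memory asset. The exact 496 bytes are not shown in the log; the sketch below writes a typical bridge+portmap chain of the kind the `Configuring bridge CNI` step installs, so treat the JSON contents as an assumption, not the literal file:

```go
package main

import "os"

// Assumed contents: a conventional bridge + portmap plugin chain. The real
// 1-k8s.conflist shipped by minikube may differ in fields and subnet.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// 0644 is the usual mode for a config file under /etc/cni/net.d.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```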
	I0617 04:44:26.851076    8395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 04:44:26.851131    8395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 04:44:26.851139    8395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-857000 minikube.k8s.io/updated_at=2024_06_17T04_44_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=84fc08e1aa3123a23ee19b25404b578b39fd2f91 minikube.k8s.io/name=running-upgrade-857000 minikube.k8s.io/primary=true
	I0617 04:44:26.894197    8395 ops.go:34] apiserver oom_adj: -16
	I0617 04:44:26.894197    8395 kubeadm.go:1107] duration metric: took 43.098ms to wait for elevateKubeSystemPrivileges
	W0617 04:44:26.894241    8395 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 04:44:26.894248    8395 kubeadm.go:393] duration metric: took 4m12.607426s to StartCluster
	I0617 04:44:26.894258    8395 settings.go:142] acquiring lock: {Name:mkdf59d9cf591c81341c913869983ffa33afef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:44:26.894443    8395 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:44:26.894808    8395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:44:26.895026    8395 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:44:26.899365    8395 out.go:177] * Verifying Kubernetes components...
	I0617 04:44:26.895038    8395 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 04:44:26.895092    8395 config.go:182] Loaded profile config "running-upgrade-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:44:26.907184    8395 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-857000"
	I0617 04:44:26.907187    8395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:44:26.907199    8395 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-857000"
	I0617 04:44:26.907203    8395 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-857000"
	W0617 04:44:26.907205    8395 addons.go:243] addon storage-provisioner should already be in state true
	I0617 04:44:26.907213    8395 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-857000"
	I0617 04:44:26.907217    8395 host.go:66] Checking if "running-upgrade-857000" exists ...
	I0617 04:44:26.908388    8395 kapi.go:59] client config for running-upgrade-857000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/running-upgrade-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104280460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:44:26.908508    8395 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-857000"
	W0617 04:44:26.908514    8395 addons.go:243] addon default-storageclass should already be in state true
	I0617 04:44:26.908523    8395 host.go:66] Checking if "running-upgrade-857000" exists ...
	I0617 04:44:26.913284    8395 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:44:26.916393    8395 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:44:26.916402    8395 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 04:44:26.916411    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:44:26.917148    8395 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 04:44:26.917152    8395 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 04:44:26.917156    8395 sshutil.go:53] new ssh client: &{IP:localhost Port:51257 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/running-upgrade-857000/id_rsa Username:docker}
	I0617 04:44:27.001576    8395 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:44:27.006630    8395 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:44:27.006671    8395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:44:27.010476    8395 api_server.go:72] duration metric: took 115.440041ms to wait for apiserver process to appear ...
	I0617 04:44:27.010484    8395 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:44:27.010491    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:27.037479    8395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 04:44:27.045271    8395 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:44:25.608940    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:25.608960    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:32.012634    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:32.012678    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:30.611086    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:30.611105    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:37.012981    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:37.013010    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:35.613265    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:35.613478    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:35.634851    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:35.634936    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:35.648441    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:35.648516    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:35.660524    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:35.660595    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:35.671074    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:35.671159    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:35.687894    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:35.687959    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:35.698712    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:35.698777    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:35.708399    8538 logs.go:276] 0 containers: []
	W0617 04:44:35.708411    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:35.708472    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:35.718947    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:35.718965    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:35.718973    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:35.734035    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:35.734050    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:35.746755    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:35.746765    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:35.757569    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:35.757579    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:35.769564    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:35.769582    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:35.774300    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:35.774306    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:35.786099    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:35.786112    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:35.811295    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:35.811305    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:35.827339    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:35.827349    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:35.844602    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:35.844616    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:35.861761    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:35.861771    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:35.963549    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:35.963563    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:35.976350    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:35.976360    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:36.004971    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:36.004989    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:36.021736    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:36.021748    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:36.036073    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:36.036086    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:38.563857    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:42.013398    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:42.013442    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:43.564290    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:43.564563    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:43.587689    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:43.587812    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:43.610139    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:43.610212    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:43.622696    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:43.622762    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:43.633416    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:43.633500    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:43.644067    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:43.644138    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:43.654896    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:43.654965    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:43.665250    8538 logs.go:276] 0 containers: []
	W0617 04:44:43.665262    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:43.665322    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:43.675861    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:43.675880    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:43.675885    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:43.697632    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:43.697646    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:43.713310    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:43.713324    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:43.730476    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:43.730487    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:43.747788    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:43.747799    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:43.758854    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:43.758864    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:43.785103    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:43.785115    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:43.796512    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:43.796523    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:43.825202    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:43.825211    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:43.839153    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:43.839163    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:43.876898    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:43.876909    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:43.889430    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:43.889439    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:43.902927    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:43.902936    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:43.917229    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:43.917241    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:43.928515    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:43.928525    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:43.941059    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:43.941069    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:47.014062    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:47.014099    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:46.446568    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:52.014742    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:52.014792    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:51.448018    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:51.448247    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:51.467932    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:51.468026    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:51.479152    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:51.479223    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:51.489924    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:51.489992    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:51.500851    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:51.500924    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:51.511386    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:51.511442    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:51.521775    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:51.521850    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:51.532094    8538 logs.go:276] 0 containers: []
	W0617 04:44:51.532107    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:51.532167    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:51.542229    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:51.542245    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:51.542250    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:51.570483    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:51.570493    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:51.605458    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:51.605471    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:51.619403    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:51.619413    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:51.631804    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:51.631818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:51.646246    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:51.646259    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:51.661633    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:51.661645    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:51.675720    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:51.675735    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:51.701467    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:51.701476    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:51.715562    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:51.715573    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:51.737245    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:51.737259    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:51.754401    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:51.754412    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:51.766035    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:51.766048    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:51.770660    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:51.770665    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:51.785693    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:51.785706    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:51.797254    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:51.797265    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:54.322670    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:57.015848    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:57.015907    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0617 04:44:57.396608    8395 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0617 04:44:57.405937    8395 out.go:177] * Enabled addons: storage-provisioner
	I0617 04:44:57.414072    8395 addons.go:510] duration metric: took 30.5193445s for enable addons: enabled=[storage-provisioner]
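The `Enabling 'default-storageclass' returned an error` warning above fails on its very first API call: marking a StorageClass as the default requires listing StorageClasses, which cannot succeed while https://10.0.2.15:8443 is still timing out. A client-go sketch of that listing, using the in-VM kubeconfig path from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig minikube writes inside the VM.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		// With the apiserver down this surfaces as the "dial tcp ... i/o
		// timeout" wrapped into the addon warning above.
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```

In this run the apiserver never comes up, so the list times out exactly as the warning shows, while the storage-provisioner manifest (already applied via kubectl) is still reported as enabled.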
	I0617 04:44:59.325006    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:59.325119    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:59.336685    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:59.336752    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:59.347203    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:59.347274    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:59.357779    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:59.357847    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:59.368533    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:59.368600    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:59.378922    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:59.378994    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:59.389211    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:59.389276    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:59.401520    8538 logs.go:276] 0 containers: []
	W0617 04:44:59.401532    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:59.401586    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:59.411515    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:59.411538    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:59.411544    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:59.425594    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:59.425607    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:59.439660    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:59.439671    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:59.461123    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:59.461135    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:02.017193    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:02.017229    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:59.490031    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:59.490042    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:59.505453    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:59.505466    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:59.519287    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:59.519299    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:59.530617    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:59.530628    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:59.541956    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:59.541968    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:59.553921    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:59.553935    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:59.558123    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:59.558133    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:59.592794    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:59.592808    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:59.608765    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:59.608778    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:59.620297    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:59.620308    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:59.637115    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:59.637125    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:59.655084    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:59.655096    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:02.183362    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:07.019256    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:07.019311    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:07.185606    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:07.185927    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:07.222261    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:07.222401    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:07.243720    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:07.243829    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:07.258048    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:07.258136    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:07.273302    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:07.273371    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:07.284026    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:07.284084    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:07.294801    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:07.294870    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:07.307522    8538 logs.go:276] 0 containers: []
	W0617 04:45:07.307533    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:07.307590    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:07.318303    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:07.318321    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:07.318326    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:07.322820    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:07.322826    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:07.378671    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:07.378688    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:07.404037    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:07.404051    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:07.419823    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:07.419834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:07.431538    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:07.431548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:07.442403    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:07.442415    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:07.455807    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:07.455818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:07.470418    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:07.470430    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:07.482046    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:07.482056    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:07.504270    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:07.504284    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:07.515943    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:07.515959    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:07.541116    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:07.541124    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:07.568691    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:07.568703    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:07.582457    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:07.582470    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:07.601982    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:07.601995    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:07.623686    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:07.623696    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
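
Each diagnostic cycle above has the same shape: enumerate containers per component with a docker name filter, then tail each match. A shell sweep equivalent to what the tool runs, built only from the commands that appear verbatim in the Run: lines:

    # Per-component container log sweep (names and tail length from the log).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_$c --format '{{.ID}}'); do
        docker logs --tail 400 "$id"
      done
    done
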
	I0617 04:45:12.021425    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:12.021461    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:10.138108    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:17.023629    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:17.023659    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:15.140369    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:15.140516    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:15.152898    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:15.152968    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:15.163402    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:15.163471    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:15.174940    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:15.175007    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:15.185676    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:15.185743    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:15.196352    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:15.196418    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:15.207000    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:15.207074    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:15.217614    8538 logs.go:276] 0 containers: []
	W0617 04:45:15.217633    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:15.217688    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:15.232902    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:15.232919    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:15.232924    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:15.260523    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:15.260531    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:15.272834    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:15.272845    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:15.285892    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:15.285903    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:15.300158    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:15.300170    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:15.311691    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:15.311703    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:15.328554    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:15.328564    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:15.339705    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:15.339714    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:15.363237    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:15.363245    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:15.367065    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:15.367074    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:15.402066    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:15.402080    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:15.416831    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:15.416840    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:15.430852    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:15.430861    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:15.442123    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:15.442137    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:15.463265    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:15.463275    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:15.478701    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:15.478715    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:15.496010    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:15.496021    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:18.009027    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:22.025119    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:22.025193    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:23.011302    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:23.011436    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:23.030136    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:23.030232    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:23.044598    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:23.044671    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:23.056429    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:23.056500    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:23.067125    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:23.067193    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:23.077542    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:23.077612    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:23.088214    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:23.088287    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:23.098806    8538 logs.go:276] 0 containers: []
	W0617 04:45:23.098818    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:23.098876    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:23.109292    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:23.109312    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:23.109319    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:23.143671    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:23.143686    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:23.164695    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:23.164709    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:23.182483    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:23.182493    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:23.200020    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:23.200033    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:23.210790    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:23.210801    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:23.226425    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:23.226437    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:23.244000    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:23.244010    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:23.255768    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:23.255781    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:23.272626    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:23.272640    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:23.301616    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:23.301628    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:23.305904    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:23.305911    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:23.346637    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:23.346656    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:23.363076    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:23.363088    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:23.387314    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:23.387323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:23.398981    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:23.398995    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:23.409629    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:23.409640    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
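
Beyond per-container logs, every cycle also pulls the host-level sources. These commands are copied verbatim from the Run: lines above and could be replayed inside the guest to gather the same evidence by hand:

    # Host-side diagnostics gathered each cycle (verbatim from the log).
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
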
	I0617 04:45:27.026950    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:27.027044    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:27.038179    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:27.038253    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:27.048411    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:27.048472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:27.058715    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:27.058787    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:27.068937    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:27.069002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:27.079346    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:27.079416    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:27.090366    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:27.090437    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:27.100620    8395 logs.go:276] 0 containers: []
	W0617 04:45:27.100632    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:27.100689    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:27.110878    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:27.110892    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:27.110898    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:27.122170    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:27.122183    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:27.136140    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:27.136153    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:27.151224    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:27.151237    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:27.168529    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:27.168540    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:27.188114    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:27.188138    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:27.199950    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:27.199960    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:27.211714    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:27.211723    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:27.223591    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:27.223603    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:27.246801    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:27.246809    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:27.282293    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:27.282300    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:27.286399    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:27.286406    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:27.321130    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:27.321143    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:25.923354    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:29.837152    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:30.925594    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:30.925733    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:30.943506    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:30.943577    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:30.955378    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:30.955446    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:30.966273    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:30.966335    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:30.976364    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:30.976427    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:30.986289    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:30.986380    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:30.996658    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:30.996724    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:31.007418    8538 logs.go:276] 0 containers: []
	W0617 04:45:31.007429    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:31.007487    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:31.018039    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:31.018057    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:31.018063    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:31.030893    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:31.030905    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:31.035168    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:31.035174    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:31.048983    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:31.048994    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:31.063088    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:31.063098    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:31.076923    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:31.076934    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:31.096683    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:31.096694    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:31.114239    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:31.114251    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:31.139095    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:31.139106    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:31.167137    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:31.167147    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:31.178627    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:31.178638    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:31.214740    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:31.214750    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:31.235920    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:31.235930    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:31.247785    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:31.247798    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:31.268227    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:31.268237    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:31.279937    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:31.279949    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:31.291296    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:31.291309    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:33.806371    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:34.839480    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:34.839641    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:34.857132    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:34.857215    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:34.871068    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:34.871136    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:34.882319    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:34.882387    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:34.892948    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:34.893012    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:34.903558    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:34.903624    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:34.914117    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:34.914186    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:34.924723    8395 logs.go:276] 0 containers: []
	W0617 04:45:34.924735    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:34.924789    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:34.935575    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:34.935591    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:34.935597    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:34.940201    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:34.940209    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:34.956562    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:34.956576    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:34.968690    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:34.968704    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:34.987354    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:34.987365    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:34.998952    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:34.998963    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:35.023506    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:35.023517    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:35.062701    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:35.062720    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:35.098833    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:35.098849    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:35.113598    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:35.113612    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:35.124784    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:35.124796    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:35.139314    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:35.139324    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:35.154906    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:35.154917    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:37.668346    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:38.808708    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:38.808871    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:38.822324    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:38.822407    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:38.834656    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:38.834725    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:38.844895    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:38.844955    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:38.855289    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:38.855365    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:38.866024    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:38.866085    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:38.876347    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:38.876404    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:38.886673    8538 logs.go:276] 0 containers: []
	W0617 04:45:38.886685    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:38.886746    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:38.902401    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:38.902418    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:38.902424    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:38.913915    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:38.913927    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:38.928145    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:38.928158    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:38.945271    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:38.945282    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:38.957178    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:38.957188    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:38.975265    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:38.975275    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:39.012153    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:39.012166    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:39.030243    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:39.030260    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:39.059802    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:39.059828    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:39.087622    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:39.087637    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:39.091780    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:39.091787    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:39.110437    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:39.110448    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:39.123226    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:39.123236    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:39.138956    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:39.138970    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:39.153328    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:39.153343    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:39.176509    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:39.176522    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:39.188109    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:39.188121    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:42.670649    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:42.670857    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:42.696193    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:42.696278    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:42.711068    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:42.711136    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:42.729619    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:42.729689    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:42.740436    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:42.740500    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:42.752786    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:42.752859    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:42.763012    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:42.763081    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:42.773092    8395 logs.go:276] 0 containers: []
	W0617 04:45:42.773103    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:42.773151    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:42.783613    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:42.783628    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:42.783634    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:42.795234    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:42.795244    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:42.799693    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:42.799700    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:42.833952    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:42.833964    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:42.849289    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:42.849303    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:42.863337    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:42.863350    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:42.877331    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:42.877341    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:42.889316    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:42.889330    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:42.925523    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:42.925531    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:42.937332    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:42.937343    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:42.953013    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:42.953027    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:42.970938    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:42.970951    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:42.982357    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:42.982370    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:41.701254    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:45.507724    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:46.703545    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:46.703657    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:46.721148    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:46.721226    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:46.733671    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:46.733736    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:46.743592    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:46.743686    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:46.754224    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:46.754292    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:46.764181    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:46.764245    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:46.774614    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:46.774682    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:46.785130    8538 logs.go:276] 0 containers: []
	W0617 04:45:46.785145    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:46.785202    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:46.796384    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:46.796408    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:46.796414    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:46.812265    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:46.812279    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:46.837526    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:46.837534    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:46.851749    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:46.851764    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:46.870272    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:46.870286    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:46.882772    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:46.882787    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:46.894698    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:46.894713    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:46.915082    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:46.915096    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:46.930035    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:46.930050    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:46.958524    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:46.958533    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:46.997265    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:46.997281    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:47.014697    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:47.014711    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:47.026078    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:47.026092    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:47.042945    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:47.042958    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:47.047318    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:47.047323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:47.058735    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:47.058751    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:47.072420    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:47.072434    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:50.510048    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:50.510307    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:50.541371    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:50.541500    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:50.560313    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:50.560403    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:50.573822    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:50.573891    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:50.585345    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:50.585416    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:50.596164    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:50.596242    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:50.606907    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:50.606978    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:50.626019    8395 logs.go:276] 0 containers: []
	W0617 04:45:50.626030    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:50.626087    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:50.637035    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:50.637054    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:50.637060    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:50.651089    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:50.651103    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:50.663507    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:50.663519    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:50.678469    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:50.678482    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:50.700718    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:50.700729    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:50.712697    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:50.712710    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:50.724108    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:50.724119    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:50.735599    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:50.735610    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:50.759579    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:50.759586    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:50.797416    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:50.797433    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:50.801935    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:50.801944    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:50.839843    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:50.839854    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:50.854383    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:50.854392    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:53.368353    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:49.591308    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:58.369312    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
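	[Editor's note: the pair of lines above — "Checking apiserver healthz" followed five seconds later by "stopped: ... context deadline exceeded" — is the client-side probe timing out, not the apiserver reporting itself unhealthy. Below is a minimal sketch of such a probe loop, assuming a plain net/http client with a 5-second timeout and TLS verification skipped for the VM's self-signed apiserver certificate; the function names are illustrative and are not minikube's actual code.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz performs one GET against the apiserver's /healthz endpoint.
	// A nil error means the control plane answered 200 OK within the timeout.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped" above
			Transport: &http.Transport{
				// the apiserver inside the VM serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		for {
			if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
				fmt.Println(err)
				// on failure the tool gathers diagnostics (see the log lines that follow), then retries
				time.Sleep(3 * time.Second)
				continue
			}
			fmt.Println("apiserver healthy")
			return
		}
	}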
	I0617 04:45:58.369568    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:58.391073    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:45:58.391169    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:58.407124    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:45:58.407205    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:54.593649    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:54.593815    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:54.611483    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:54.611575    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:54.625122    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:54.625188    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:54.637713    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:54.637779    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:54.648052    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:54.648127    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:54.658503    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:54.658566    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:54.669430    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:54.669501    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:54.680021    8538 logs.go:276] 0 containers: []
	W0617 04:45:54.680034    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:54.680098    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:54.693562    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:54.693581    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:54.693586    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:54.728368    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:54.728381    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:54.749911    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:54.749921    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:54.761427    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:54.761438    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:54.786580    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:54.786587    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:54.815277    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:54.815287    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:54.829143    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:54.829154    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:54.842692    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:54.842702    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:54.856798    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:54.856808    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:54.871633    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:54.871642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:54.888899    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:54.888907    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:54.900620    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:54.900631    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:54.918014    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:54.918030    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:54.931194    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:54.931205    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:54.935407    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:54.935415    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:54.948368    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:54.948379    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:54.959798    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:54.959810    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
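	[Editor's note: each diagnostic pass above has the same two-phase shape — enumerate containers per control-plane component with a docker ps name filter, then tail 400 lines of logs from each ID found. The sketch below reproduces that shape under stated assumptions: runShell is a hypothetical stand-in for minikube's ssh_runner (the real commands execute inside the guest VM over SSH), while the component list, the k8s_ name prefix, and the 400-line tail are taken from the log itself.]

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runShell is a hypothetical stand-in for ssh_runner: in the real flow these
	// commands run inside the guest VM over SSH, not on the host.
	func runShell(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func gatherLogs() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			// phase 1: list all containers (running or exited) for this component
			out, err := runShell(fmt.Sprintf(
				"docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
			if err != nil {
				continue
			}
			ids := strings.Fields(out)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c) // cf. the kindnet warning above
				continue
			}
			// phase 2: tail each container's logs, as in the "Gathering logs for ..." lines
			for _, id := range ids {
				runShell(fmt.Sprintf("docker logs --tail 400 %s", id))
			}
		}
	}

	func main() { gatherLogs() }

	[One design choice in the closing "container status" step is worth spelling out: in sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the inner `which crictl || echo crictl` keeps the command word non-empty when crictl is not installed, so the first pipeline fails cleanly with "command not found" and the outer || falls through to sudo docker ps -a.]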
	I0617 04:45:57.474727    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:58.419558    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:45:58.419629    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:58.430593    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:45:58.430660    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:58.440830    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:45:58.440903    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:58.451527    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:45:58.451594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:58.462389    8395 logs.go:276] 0 containers: []
	W0617 04:45:58.462402    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:58.462460    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:58.472950    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:45:58.472966    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:45:58.472972    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:45:58.487788    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:45:58.487801    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:45:58.499617    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:45:58.499628    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:45:58.517365    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:45:58.517376    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:58.529089    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:45:58.529103    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:45:58.544177    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:45:58.544191    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:45:58.555604    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:45:58.555615    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:45:58.566879    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:45:58.566890    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:45:58.582088    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:58.582100    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:58.621954    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:58.621966    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:58.627334    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:58.627344    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:58.662596    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:45:58.662609    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:45:58.677574    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:58.677587    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:01.206303    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:02.477107    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:02.477312    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:02.498505    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:02.498605    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:02.513541    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:02.513624    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:02.525104    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:02.525176    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:02.535880    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:02.535952    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:02.547557    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:02.547622    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:02.557776    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:02.557846    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:02.567618    8538 logs.go:276] 0 containers: []
	W0617 04:46:02.567630    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:02.567692    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:02.578121    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:02.578138    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:02.578144    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:02.592591    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:02.592604    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:02.604919    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:02.604930    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:02.615942    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:02.615954    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:02.627994    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:02.628003    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:02.632012    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:02.632018    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:02.647730    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:02.647743    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:02.659541    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:02.659553    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:02.683933    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:02.683948    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:02.712179    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:02.712188    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:02.725605    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:02.725617    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:02.743268    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:02.743282    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:02.759994    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:02.760007    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:02.776640    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:02.776655    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:02.817390    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:02.817401    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:02.831684    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:02.831694    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:02.843638    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:02.843651    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:06.208668    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:06.208958    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:06.234657    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:06.234782    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:06.252418    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:06.252496    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:06.265288    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:06.265363    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:06.276899    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:06.276975    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:06.287577    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:06.287650    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:06.298343    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:06.298412    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:06.308510    8395 logs.go:276] 0 containers: []
	W0617 04:46:06.308522    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:06.308581    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:06.319287    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:06.319301    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:06.319306    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:06.330766    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:06.330777    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:06.335150    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:06.335157    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:06.346184    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:06.346198    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:06.361092    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:06.361101    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:06.372341    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:06.372350    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:06.393998    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:06.394012    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:06.418746    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:06.418754    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:06.456444    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:06.456451    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:06.495616    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:06.495629    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:06.509596    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:06.509610    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:06.523068    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:06.523078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:06.534803    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:06.534816    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:05.371066    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:09.048086    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:10.373301    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:10.373498    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:10.389619    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:10.389699    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:10.401778    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:10.401847    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:10.412513    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:10.412579    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:10.422910    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:10.422978    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:10.433347    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:10.433420    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:10.444148    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:10.444213    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:10.454517    8538 logs.go:276] 0 containers: []
	W0617 04:46:10.454528    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:10.454582    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:10.465391    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:10.465407    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:10.465412    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:10.482238    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:10.482249    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:10.493657    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:10.493667    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:10.514108    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:10.514118    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:10.537656    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:10.537669    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:10.549793    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:10.549805    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:10.575033    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:10.575042    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:10.591985    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:10.591999    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:10.609038    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:10.609050    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:10.627731    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:10.627742    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:10.639577    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:10.639589    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:10.652789    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:10.652799    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:10.666843    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:10.666854    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:10.697151    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:10.697164    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:10.732782    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:10.732793    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:10.747007    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:10.747016    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:10.751852    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:10.751859    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:13.268642    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:14.050385    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:14.050782    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:14.088309    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:14.088434    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:14.108381    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:14.108463    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:14.123871    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:14.123953    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:14.135827    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:14.135901    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:14.147381    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:14.147450    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:14.165903    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:14.165980    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:14.179949    8395 logs.go:276] 0 containers: []
	W0617 04:46:14.179962    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:14.180023    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:14.190039    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:14.190054    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:14.190062    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:14.204122    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:14.204133    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:14.228916    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:14.228930    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:14.244355    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:14.244368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:14.262215    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:14.262227    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:14.273648    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:14.273662    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:14.285301    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:14.285312    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:14.309616    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:14.309626    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:14.321553    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:14.321566    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:14.359599    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:14.359606    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:14.363842    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:14.363849    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:14.398500    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:14.398511    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:14.412843    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:14.412858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:16.926427    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:18.271208    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:18.271444    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:18.292115    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:18.292211    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:18.307523    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:18.307601    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:18.319883    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:18.319956    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:18.330973    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:18.331040    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:18.341430    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:18.341503    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:18.352445    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:18.352507    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:18.366927    8538 logs.go:276] 0 containers: []
	W0617 04:46:18.366939    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:18.366998    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:18.377526    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:18.377543    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:18.377550    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:18.407540    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:18.407551    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:18.428184    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:18.428195    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:18.440329    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:18.440344    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:18.452352    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:18.452363    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:18.486768    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:18.486782    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:18.502092    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:18.502104    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:18.519173    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:18.519186    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:18.529822    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:18.529834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:18.547700    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:18.547710    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:18.562073    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:18.562089    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:18.566720    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:18.566729    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:18.579517    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:18.579527    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:18.595702    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:18.595715    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:18.607060    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:18.607075    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:18.624483    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:18.624494    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:18.635783    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:18.635795    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:21.928793    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:21.929002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:21.957532    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:21.957642    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:21.976371    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:21.976439    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:21.988626    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:21.988692    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:21.999090    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:21.999152    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:22.016263    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:22.016338    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:22.026651    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:22.026716    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:22.036795    8395 logs.go:276] 0 containers: []
	W0617 04:46:22.036807    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:22.036869    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:22.047091    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:22.047107    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:22.047115    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:22.062276    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:22.062287    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:22.077316    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:22.077329    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:22.094990    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:22.095003    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:22.106549    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:22.106559    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:22.144328    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:22.144338    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:22.148555    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:22.148562    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:22.183225    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:22.183239    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:22.197487    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:22.197498    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:22.210467    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:22.210480    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:22.229328    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:22.229342    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:22.241254    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:22.241264    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:22.253477    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:22.253487    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:21.162708    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:24.779344    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:26.165077    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:26.165391    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:26.195250    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:26.195370    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:26.213045    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:26.213128    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:26.233015    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:26.233094    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:26.247985    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:26.248056    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:26.264424    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:26.264490    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:26.275409    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:26.275479    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:26.285400    8538 logs.go:276] 0 containers: []
	W0617 04:46:26.285411    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:26.285467    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:26.295769    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:26.295786    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:26.295791    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:26.319269    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:26.319281    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:26.333192    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:26.333203    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:26.344613    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:26.344623    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:26.355904    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:26.355914    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:26.376356    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:26.376369    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:26.391672    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:26.391683    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:26.408942    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:26.408953    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:26.420280    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:26.420290    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:26.432660    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:26.432674    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:26.447986    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:26.447997    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:26.466117    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:26.466127    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:26.477759    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:26.477773    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:26.489507    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:26.489518    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:26.519285    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:26.519296    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:26.523581    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:26.523587    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:26.558632    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:26.558643    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:29.075661    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:29.781673    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:29.781821    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:29.797641    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:29.797728    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:29.812504    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:29.812576    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:29.824153    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:29.824215    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:29.834951    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:29.835019    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:29.845824    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:29.845896    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:29.860637    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:29.860706    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:29.870501    8395 logs.go:276] 0 containers: []
	W0617 04:46:29.870514    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:29.870584    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:29.880785    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:29.880801    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:29.880806    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:29.892328    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:29.892342    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:29.926500    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:29.926526    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:29.938389    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:29.938400    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:29.953591    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:29.953600    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:29.972585    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:29.972595    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:29.984312    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:29.984326    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:30.004661    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:30.004674    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:30.027692    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:30.027701    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:30.063578    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:30.063585    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:30.067932    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:30.067938    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:30.081891    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:30.081905    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:30.096282    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:30.096292    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:32.608603    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:34.077827    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:34.077981    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:34.095605    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:34.095697    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:34.108892    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:34.108961    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:34.119500    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:34.119558    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:34.129829    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:34.129903    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:34.140628    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:34.140694    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:34.151109    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:34.151175    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:34.164430    8538 logs.go:276] 0 containers: []
	W0617 04:46:34.164442    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:34.164497    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:34.175211    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:34.175235    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:34.175240    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:34.198446    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:34.198454    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:34.211256    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:34.211268    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:34.227963    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:34.227973    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:34.262792    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:34.262805    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:34.276673    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:34.276684    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:34.288234    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:34.288246    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:34.311456    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:34.311466    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:34.316193    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:34.316199    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:34.329299    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:34.329309    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:34.340869    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:34.340880    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:34.351895    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:34.351910    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:34.380687    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:34.380697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:34.395035    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:34.395046    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:34.416149    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:34.416163    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:34.432220    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:34.432230    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:34.444055    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:34.444066    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:37.609392    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:37.609600    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:37.637536    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:37.637657    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:37.654626    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:37.654708    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:37.668186    8395 logs.go:276] 2 containers: [5184e943075e c26f91c53a8c]
	I0617 04:46:37.668261    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:37.679877    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:37.679949    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:37.691531    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:37.691608    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:37.701683    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:37.701751    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:37.711557    8395 logs.go:276] 0 containers: []
	W0617 04:46:37.711570    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:37.711630    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:37.721811    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:37.721827    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:37.721833    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:37.726394    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:37.726404    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:37.740567    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:37.740577    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:37.752202    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:37.752216    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:37.763392    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:37.763405    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:37.778241    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:37.778250    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:37.795735    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:37.795745    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:37.807927    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:37.807941    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:37.845595    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:37.845606    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:37.903313    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:37.903326    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:37.932273    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:37.932286    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:37.960629    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:37.960646    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:37.997943    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:37.997957    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
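
Each diagnostic sweep begins by discovering container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; exited containers are included via -a, which is why some components report two IDs after a restart. A sketch of that discovery step, run against local Docker rather than over SSH as the harness does (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker ps invocation seen in the log and
// returns the matching container IDs, including exited containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
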
	I0617 04:46:36.966106    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:40.529582    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:41.968311    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:41.968539    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:41.992413    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:41.992537    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:42.009675    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:42.009751    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:42.022791    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:42.022868    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:42.036807    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:42.036877    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:42.047546    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:42.047616    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:42.061260    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:42.061327    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:42.071288    8538 logs.go:276] 0 containers: []
	W0617 04:46:42.071301    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:42.071360    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:42.085206    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:42.085227    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:42.085232    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:42.105711    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:42.105723    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:42.123656    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:42.123667    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:42.152311    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:42.152321    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:42.191541    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:42.191555    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:42.204289    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:42.204300    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:42.216386    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:42.216399    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:42.227534    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:42.227548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:42.244695    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:42.244705    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:42.257194    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:42.257206    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:42.269074    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:42.269087    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:42.273327    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:42.273335    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:42.287357    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:42.287369    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:42.301781    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:42.301799    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:42.326010    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:42.326032    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:42.340020    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:42.340032    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:42.360968    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:42.360980    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:45.531817    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:45.531935    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:45.543279    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:45.543349    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:45.553734    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:45.553802    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:45.564739    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:46:45.564815    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:45.575813    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:45.575881    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:45.586503    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:45.586579    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:45.596904    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:45.596997    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:45.608468    8395 logs.go:276] 0 containers: []
	W0617 04:46:45.608479    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:45.608533    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:45.619856    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:45.619875    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:45.619881    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:45.633580    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:46:45.633594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:46:45.644463    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:45.644473    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:45.656426    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:45.656440    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:45.667832    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:45.667845    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:45.685460    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:45.685472    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:45.729833    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:46:45.729847    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:46:45.741640    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:45.741652    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:45.755698    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:45.755711    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:45.770665    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:45.770676    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:45.807346    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:45.807354    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:45.811691    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:45.811697    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:45.836237    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:45.836247    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:45.847690    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:45.847701    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:45.859998    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:45.860012    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
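
The "container status" step relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backticks resolve crictl's path when it is installed; if crictl is absent or its invocation fails, the outer || falls back to docker ps -a, so the same command works on either runtime. A sketch of issuing that compound command from Go (the function is a stand-in for the ssh_runner invocation shown in the log, which executes it over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback chain from the log: prefer
// crictl if it is on PATH, otherwise fall back to docker ps -a.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("status collection failed:", err)
	}
	fmt.Print(out)
}
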
	I0617 04:46:48.373758    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:44.874957    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:53.376054    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:53.376509    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:49.877182    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:49.877328    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:49.890515    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:49.890590    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:49.901822    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:49.901895    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:49.911991    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:49.912057    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:49.922498    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:49.922576    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:49.933370    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:49.933432    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:49.944070    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:49.944145    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:49.954327    8538 logs.go:276] 0 containers: []
	W0617 04:46:49.954339    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:49.954398    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:49.965012    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:49.965030    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:49.965036    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:50.000006    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:50.000017    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:50.015886    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:50.015895    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:50.028891    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:50.028904    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:50.042662    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:50.042674    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:50.063124    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:50.063135    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:50.085642    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:50.085655    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:50.107104    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:50.107115    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:50.124312    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:50.124323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:50.141606    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:50.141616    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:50.171357    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:50.171372    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:50.175899    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:50.175910    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:50.190416    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:50.190426    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:50.202537    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:50.202548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:50.218792    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:50.218804    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:50.241951    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:50.241961    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:50.256213    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:50.256224    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:52.770314    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:53.420903    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:46:53.421030    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:53.439875    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:46:53.439971    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:53.454806    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:46:53.454883    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:53.467055    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:46:53.467129    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:53.484378    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:46:53.484443    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:53.495313    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:46:53.495384    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:53.511585    8395 logs.go:276] 0 containers: []
	W0617 04:46:53.511597    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:53.511657    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:53.525522    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:46:53.525541    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:53.525547    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:53.530469    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:46:53.530476    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:46:53.542476    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:46:53.542488    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:46:53.560066    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:46:53.560077    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:46:53.571571    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:46:53.571582    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:53.584244    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:53.584258    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:53.622147    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:53.622161    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:53.658627    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:46:53.658640    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:46:53.673641    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:46:53.673651    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:46:53.688326    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:46:53.688336    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:46:53.699672    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:46:53.699686    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:46:53.711419    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:53.711431    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:53.735070    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:46:53.735078    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:46:53.746338    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:46:53.746349    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:46:53.761270    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:46:53.761281    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:46:56.274924    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:57.772758    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:57.772987    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:57.796370    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:57.796482    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:57.813084    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:57.813164    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:57.826230    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:57.826307    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:57.837236    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:57.837307    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:57.847656    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:57.847722    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:57.858239    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:57.858308    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:57.872801    8538 logs.go:276] 0 containers: []
	W0617 04:46:57.872813    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:57.872870    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:57.883121    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:57.883140    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:57.883146    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:57.918890    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:57.918900    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:57.931745    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:57.931754    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:57.942298    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:57.942311    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:57.971464    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:57.971473    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:57.975681    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:57.975689    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:57.987058    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:57.987067    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:58.004163    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:58.004174    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:58.015948    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:58.015959    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:58.027556    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:58.027568    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:58.040024    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:58.040035    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:58.057906    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:58.057916    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:58.075676    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:58.075687    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:58.099820    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:58.099828    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:58.113811    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:58.113821    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:58.128007    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:58.128017    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:58.154012    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:58.154027    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
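
Besides per-container docker logs --tail 400, every pass also collects host-level logs: journalctl for the kubelet unit, journalctl for the docker and cri-docker units, and a severity-filtered dmesg. A hedged sketch gathering those three sources with the exact commands from the log (the map-based wrapper is illustrative, not the harness's structure):

package main

import (
	"fmt"
	"os/exec"
)

// hostLogs gathers the same host-level sources the test harness reads
// on every diagnostic pass; each command string matches a log line above.
func hostLogs() map[string]string {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	out := make(map[string]string)
	for name, cmd := range cmds {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range hostLogs() {
		fmt.Printf("== %s ==\n%s\n", name, logs)
	}
}
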
	I0617 04:47:01.277499    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:01.277734    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:01.304076    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:01.304201    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:01.322017    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:01.322102    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:01.336353    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:01.336429    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:01.347914    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:01.347992    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:01.359449    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:01.359523    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:01.371238    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:01.371304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:01.381727    8395 logs.go:276] 0 containers: []
	W0617 04:47:01.381738    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:01.381789    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:01.392285    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:01.392315    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:01.392323    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:01.407296    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:01.407311    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:01.419320    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:01.419331    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:01.433620    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:01.433631    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:01.452641    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:01.452651    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:01.463533    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:01.463544    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:01.481029    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:01.481041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:01.498812    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:01.498823    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:01.510855    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:01.510865    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:01.550282    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:01.550293    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:01.586908    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:01.586918    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:01.591559    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:01.591566    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:01.603677    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:01.603690    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:01.628278    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:01.628289    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:01.642354    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:01.642368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:00.673504    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:04.159552    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:05.675721    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:05.675820    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:05.686404    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:05.686478    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:05.696805    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:05.696881    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:05.707404    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:05.707476    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:05.717443    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:05.717515    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:05.727993    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:05.728058    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:05.738871    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:05.738938    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:05.749000    8538 logs.go:276] 0 containers: []
	W0617 04:47:05.749013    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:05.749076    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:05.759914    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:05.759932    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:05.759937    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:05.797026    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:05.797039    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:05.811409    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:05.811420    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:05.822989    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:05.823003    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:05.851628    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:05.851642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:05.865396    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:05.865407    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:05.877988    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:05.877998    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:05.889008    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:05.889023    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:05.907085    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:05.907095    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:05.921014    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:05.921023    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:05.936702    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:05.936712    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:05.948357    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:05.948370    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:05.965708    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:05.965723    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:05.977330    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:05.977341    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:05.991078    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:05.991088    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:05.995116    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:05.995124    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:06.016128    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:06.016139    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:08.542104    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:09.161801    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:09.161939    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:09.174068    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:09.174139    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:09.184747    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:09.184822    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:09.195139    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:09.195212    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:09.205792    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:09.205866    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:09.217070    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:09.217140    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:09.227421    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:09.227481    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:09.238039    8395 logs.go:276] 0 containers: []
	W0617 04:47:09.238048    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:09.238096    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:09.248741    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:09.248755    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:09.248760    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:09.286409    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:09.286421    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:09.323998    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:09.324012    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:09.342851    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:09.342865    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:09.354710    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:09.354722    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:09.359467    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:09.359476    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:09.370857    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:09.370867    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:09.382435    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:09.382447    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:09.399585    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:09.399595    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:09.411736    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:09.411746    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:09.423112    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:09.423122    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:09.440044    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:09.440055    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:09.451928    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:09.451938    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:09.469844    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:09.469858    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:09.481940    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:09.481951    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:12.008126    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:13.544715    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:13.545109    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:13.578840    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:13.578989    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:13.606642    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:13.606727    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:13.619828    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:13.619907    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:13.631718    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:13.631792    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:13.643094    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:13.643163    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:13.655519    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:13.655589    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:13.665775    8538 logs.go:276] 0 containers: []
	W0617 04:47:13.665785    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:13.665843    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:13.676199    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:13.676218    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:13.676224    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:13.711098    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:13.711109    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:13.724064    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:13.724073    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:13.734772    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:13.734785    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:13.750382    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:13.750395    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:13.763236    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:13.763248    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:13.784047    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:13.784058    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:13.800894    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:13.800905    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:13.828763    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:13.828771    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:13.842802    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:13.842814    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:13.854550    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:13.854560    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:13.872108    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:13.872118    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:13.883342    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:13.883351    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:13.907319    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:13.907327    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:13.920053    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:13.920064    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:13.924641    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:13.924647    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:13.938806    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:13.938818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
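
The "describe nodes" step shells out to the kubectl binary minikube stages inside the VM, /var/lib/minikube/binaries/v1.24.1/kubectl, pointed at the VM-local kubeconfig so node state is captured even when the host's kubectl context is unusable. A sketch under that assumption (paths verbatim from the log; the wrapper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// describeNodes invokes the VM-staged kubectl exactly as the harness
// does, using the in-VM kubeconfig rather than the host's context.
func describeNodes() (string, error) {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNodes()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(out)
}
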
	I0617 04:47:17.010523    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:17.010662    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:17.022218    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:17.022285    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:17.033407    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:17.033478    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:17.044598    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:17.044666    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:17.055941    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:17.056002    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:17.067750    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:17.067823    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:17.080479    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:17.080548    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:17.090817    8395 logs.go:276] 0 containers: []
	W0617 04:47:17.090831    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:17.090888    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:17.102214    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:17.102231    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:17.102237    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:17.140093    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:17.140107    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:17.158195    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:17.158205    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:17.174905    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:17.174918    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:17.189438    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:17.189448    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:17.201653    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:17.201664    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:17.217495    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:17.217507    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:17.241007    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:17.241014    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:17.245292    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:17.245301    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:17.257908    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:17.257921    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:17.270102    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:17.270112    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:17.282301    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:17.282314    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:17.318998    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:17.319005    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:17.333841    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:17.333849    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:17.349071    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:17.349085    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:16.456332    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:19.863117    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:21.457040    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:21.457348    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:21.493602    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:21.493715    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:21.509993    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:21.510078    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:21.522514    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:21.522592    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:21.537925    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:21.537998    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:21.548650    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:21.548720    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:21.559359    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:21.559426    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:21.570196    8538 logs.go:276] 0 containers: []
	W0617 04:47:21.570207    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:21.570258    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:21.581669    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:21.581686    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:21.581691    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:21.593312    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:21.593324    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:21.614590    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:21.614602    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:21.637824    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:21.637837    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:21.665273    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:21.665285    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:21.679992    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:21.680003    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:21.700638    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:21.700650    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:21.713085    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:21.713096    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:21.746857    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:21.746869    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:21.759982    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:21.759996    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:21.777476    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:21.777490    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:21.788487    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:21.788501    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:21.802687    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:21.802697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:21.814541    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:21.814555    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:21.831627    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:21.831642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:21.842685    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:21.842696    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:21.847286    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:21.847293    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
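Each gathering pass is book-ended by another probe of the apiserver's /healthz endpoint; the recurring "context deadline exceeded" lines mean the probe never returns before the client timeout. A minimal stand-in for the probe (URL from the log; the 5-second bound is an illustrative assumption):

    # Probe the same healthz endpoint the log polls; -k skips TLS
    # verification, --max-time caps the wait (assumed value).
    curl -fsSk --max-time 5 https://10.0.2.15:8443/healthz && echo ok || echo unreachable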
	I0617 04:47:24.364306    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:24.865541    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:24.865704    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:24.883478    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:24.883566    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:24.897548    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:24.897628    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:24.910190    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:24.910263    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:24.921013    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:24.921082    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:24.930976    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:24.931043    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:24.942205    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:24.942275    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:24.956266    8395 logs.go:276] 0 containers: []
	W0617 04:47:24.956278    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:24.956330    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:24.967020    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:24.967037    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:24.967042    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:24.984298    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:24.984308    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:24.995646    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:24.995657    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:25.018968    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:25.018976    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:25.036439    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:25.036449    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:25.071830    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:25.071841    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:25.076135    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:25.076141    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:25.090649    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:25.090662    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:25.104610    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:25.104621    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:25.117292    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:25.117303    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:25.154018    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:25.154031    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:25.165686    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:25.165700    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:25.178029    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:25.178041    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:25.189634    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:25.189643    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:25.201521    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:25.201533    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:27.718052    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:29.366487    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:29.366618    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:29.381192    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:29.381284    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:29.393759    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:29.393822    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:29.407831    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:29.407893    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:29.418422    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:29.418485    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:29.433825    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:29.433902    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:29.444248    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:29.444314    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:29.454504    8538 logs.go:276] 0 containers: []
	W0617 04:47:29.454519    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:29.454584    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:32.719744    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:32.719865    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:32.736516    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:32.736594    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:32.753508    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:32.753585    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:32.764586    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:32.764655    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:32.774859    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:32.774925    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:32.785886    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:32.785957    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:32.796144    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:32.796218    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:32.806423    8395 logs.go:276] 0 containers: []
	W0617 04:47:32.806445    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:32.806499    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:32.816847    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:32.816865    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:32.816872    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:32.852438    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:32.852451    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:32.869411    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:32.869424    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:32.906681    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:32.906688    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:32.911085    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:32.911091    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:32.926497    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:32.926507    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:32.938602    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:32.938614    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:32.953245    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:32.953255    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:32.965375    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:32.965386    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:32.977287    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:32.977297    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:32.992901    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:32.992911    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:33.017650    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:33.017659    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:33.030057    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:33.030068    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:33.042381    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:33.042391    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:33.053774    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:33.053785    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:29.464745    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:29.468570    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:29.468582    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:29.486749    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:29.486760    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:29.498583    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:29.498597    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:29.521028    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:29.521034    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:29.554738    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:29.554750    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:29.568943    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:29.568954    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:29.591685    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:29.591697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:29.603796    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:29.603806    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:29.615376    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:29.615388    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:29.630524    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:29.630534    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:29.641724    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:29.641734    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:29.657264    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:29.657276    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:29.669601    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:29.669616    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:29.673907    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:29.673916    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:29.696970    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:29.696982    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:29.725815    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:29.725834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:29.740005    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:29.740016    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:32.256082    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:37.257405    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:37.257482    8538 kubeadm.go:591] duration metric: took 4m3.634094791s to restartPrimaryControlPlane
	W0617 04:47:37.257545    8538 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 04:47:37.257575    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0617 04:47:38.231642    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 04:47:38.236705    8538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:47:38.239467    8538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:47:38.242286    8538 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:47:38.242293    8538 kubeadm.go:156] found existing configuration files:
	
	I0617 04:47:38.242315    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf
	I0617 04:47:38.245049    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:47:38.245074    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:47:38.247594    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf
	I0617 04:47:38.250521    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:47:38.250544    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:47:38.253835    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf
	I0617 04:47:38.256378    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:47:38.256401    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:47:38.258917    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf
	I0617 04:47:38.262080    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:47:38.262105    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
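The four grep-then-remove pairs above are minikube's stale-config cleanup: every expected kubeconfig under /etc/kubernetes is checked for the current control-plane endpoint and deleted when the endpoint (or the file itself) is missing. Condensed into a sketch, with the endpoint and file list exactly as they appear in the log:

    # Stale kubeconfig cleanup as performed above, one file at a time;
    # a failed grep (missing endpoint or missing file) triggers removal.
    endpoint=https://control-plane.minikube.internal:51507
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
    done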
	I0617 04:47:38.266879    8538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 04:47:38.284575    8538 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0617 04:47:38.284640    8538 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 04:47:38.334908    8538 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 04:47:38.334971    8538 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 04:47:38.335025    8538 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 04:47:38.384440    8538 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 04:47:35.570523    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:38.392525    8538 out.go:204]   - Generating certificates and keys ...
	I0617 04:47:38.392557    8538 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 04:47:38.392590    8538 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 04:47:38.392628    8538 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 04:47:38.392661    8538 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 04:47:38.392702    8538 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 04:47:38.392737    8538 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 04:47:38.392774    8538 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 04:47:38.392810    8538 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 04:47:38.392852    8538 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 04:47:38.392892    8538 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 04:47:38.392910    8538 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 04:47:38.392940    8538 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 04:47:38.515362    8538 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 04:47:38.644858    8538 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 04:47:38.720688    8538 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 04:47:38.842351    8538 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 04:47:38.870997    8538 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 04:47:38.871441    8538 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 04:47:38.871466    8538 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 04:47:38.937191    8538 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 04:47:38.941356    8538 out.go:204]   - Booting up control plane ...
	I0617 04:47:38.941545    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 04:47:38.941646    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 04:47:38.941702    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 04:47:38.941741    8538 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 04:47:38.941845    8538 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 04:47:40.572827    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:40.572939    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:40.585579    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:40.585654    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:40.597122    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:40.597194    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:40.608650    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:40.608719    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:40.622352    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:40.622423    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:40.633799    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:40.633875    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:40.644769    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:40.644837    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:40.655798    8395 logs.go:276] 0 containers: []
	W0617 04:47:40.655812    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:40.655877    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:40.666710    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:40.666731    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:40.666738    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:40.708443    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:40.708463    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:40.713377    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:40.713389    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:40.737946    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:40.737957    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:40.750477    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:40.750489    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:40.776034    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:40.776049    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:40.798508    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:40.798525    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:40.815663    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:40.815687    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:40.828361    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:40.828375    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:40.865778    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:40.865791    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:40.884154    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:40.884167    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:40.896385    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:40.896395    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:40.908461    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:40.908473    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:40.930355    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:40.930368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:40.946954    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:40.946967    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:43.440624    8538 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501282 seconds
	I0617 04:47:43.440795    8538 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 04:47:43.446014    8538 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 04:47:43.955056    8538 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 04:47:43.955179    8538 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-767000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 04:47:44.459918    8538 kubeadm.go:309] [bootstrap-token] Using token: 3k16i9.lt87x78crfyjzuv5
	I0617 04:47:44.464216    8538 out.go:204]   - Configuring RBAC rules ...
	I0617 04:47:44.464263    8538 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 04:47:44.474093    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 04:47:44.476065    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 04:47:44.477990    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 04:47:44.479300    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 04:47:44.481417    8538 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 04:47:44.485343    8538 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 04:47:44.635917    8538 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 04:47:44.863679    8538 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 04:47:44.864176    8538 kubeadm.go:309] 
	I0617 04:47:44.864205    8538 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 04:47:44.864207    8538 kubeadm.go:309] 
	I0617 04:47:44.864240    8538 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 04:47:44.864248    8538 kubeadm.go:309] 
	I0617 04:47:44.864266    8538 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 04:47:44.864298    8538 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 04:47:44.864348    8538 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 04:47:44.864351    8538 kubeadm.go:309] 
	I0617 04:47:44.864382    8538 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 04:47:44.864390    8538 kubeadm.go:309] 
	I0617 04:47:44.864418    8538 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 04:47:44.864422    8538 kubeadm.go:309] 
	I0617 04:47:44.864446    8538 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 04:47:44.864496    8538 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 04:47:44.864559    8538 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 04:47:44.864563    8538 kubeadm.go:309] 
	I0617 04:47:44.864605    8538 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 04:47:44.864688    8538 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 04:47:44.864693    8538 kubeadm.go:309] 
	I0617 04:47:44.864734    8538 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3k16i9.lt87x78crfyjzuv5 \
	I0617 04:47:44.864783    8538 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb \
	I0617 04:47:44.864795    8538 kubeadm.go:309] 	--control-plane 
	I0617 04:47:44.864799    8538 kubeadm.go:309] 
	I0617 04:47:44.864839    8538 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 04:47:44.864844    8538 kubeadm.go:309] 
	I0617 04:47:44.864882    8538 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3k16i9.lt87x78crfyjzuv5 \
	I0617 04:47:44.864952    8538 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb 
	I0617 04:47:44.865264    8538 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
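The join commands printed above pin the cluster CA through --discovery-token-ca-cert-hash, the SHA-256 of the CA's public key. Per the standard kubeadm recipe it can be recomputed on the control-plane node with:

    # Recompute the CA public-key hash used in the join commands
    # (documented kubeadm method; default kubeadm CA path assumed).
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'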
	I0617 04:47:44.865275    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:47:44.865283    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:47:44.869086    8538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 04:47:44.872896    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 04:47:44.875899    8538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
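The `scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` step writes the bridge CNI config announced just above; the 496-byte payload itself is not captured in the log. For orientation only, a bridge conflist of the kind this step installs looks roughly like the following (all field values are illustrative assumptions, not the actual file):

    # Illustrative sketch only: the real 1-k8s.conflist content is not
    # shown in this log; the values below are assumptions.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF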
	I0617 04:47:44.881425    8538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 04:47:44.881496    8538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-767000 minikube.k8s.io/updated_at=2024_06_17T04_47_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=84fc08e1aa3123a23ee19b25404b578b39fd2f91 minikube.k8s.io/name=stopped-upgrade-767000 minikube.k8s.io/primary=true
	I0617 04:47:44.881499    8538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 04:47:44.914264    8538 kubeadm.go:1107] duration metric: took 32.798792ms to wait for elevateKubeSystemPrivileges
	I0617 04:47:44.923989    8538 ops.go:34] apiserver oom_adj: -16
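The `oom_adj: -16` read back above is the kernel OOM-killer bias of the kube-apiserver process: on the legacy -17..15 scale, a strongly negative value makes the kernel avoid killing the apiserver under memory pressure. The check reduces to:

    # Read the OOM-killer bias of the running apiserver; oom_adj is the
    # legacy knob, oom_score_adj (-1000..1000) its modern replacement.
    pid=$(pgrep -n kube-apiserver)
    cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj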
	W0617 04:47:44.924021    8538 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 04:47:44.924035    8538 kubeadm.go:393] duration metric: took 4m11.314278458s to StartCluster
	I0617 04:47:44.924045    8538 settings.go:142] acquiring lock: {Name:mkdf59d9cf591c81341c913869983ffa33afef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:47:44.924136    8538 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:47:44.924540    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:47:44.924770    8538 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:47:44.928897    8538 out.go:177] * Verifying Kubernetes components...
	I0617 04:47:44.924779    8538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 04:47:44.924863    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:47:44.933037    8538 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-767000"
	I0617 04:47:44.933039    8538 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-767000"
	I0617 04:47:44.933051    8538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-767000"
	I0617 04:47:44.933053    8538 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-767000"
	W0617 04:47:44.933056    8538 addons.go:243] addon storage-provisioner should already be in state true
	I0617 04:47:44.933067    8538 host.go:66] Checking if "stopped-upgrade-767000" exists ...
	I0617 04:47:44.933101    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:47:44.937914    8538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:47:43.463789    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:44.940959    8538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:47:44.940966    8538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 04:47:44.940973    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:47:44.942107    8538 kapi.go:59] client config for stopped-upgrade-767000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a0460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:47:44.942243    8538 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-767000"
	W0617 04:47:44.942249    8538 addons.go:243] addon default-storageclass should already be in state true
	I0617 04:47:44.942261    8538 host.go:66] Checking if "stopped-upgrade-767000" exists ...
	I0617 04:47:44.943004    8538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 04:47:44.943008    8538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 04:47:44.943012    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:47:45.006041    8538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:47:45.011549    8538 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:47:45.011589    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:47:45.015226    8538 api_server.go:72] duration metric: took 90.447459ms to wait for apiserver process to appear ...
	I0617 04:47:45.015234    8538 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:47:45.015240    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:45.035675    8538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:47:45.037192    8538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
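Addon installation follows the two-step pattern visible above: each manifest is copied into /etc/kubernetes/addons/ over SSH, then applied with the cluster's own kubectl binary against the in-VM kubeconfig. Generalized, with paths and version string exactly as in the log:

    # Apply an addon manifest the way minikube does above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml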
	I0617 04:47:48.466012    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:48.466192    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:48.478392    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:48.478465    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:48.489118    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:48.489188    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:48.500005    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:48.500085    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:48.510346    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:48.510427    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:48.520714    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:48.520773    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:48.531127    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:48.531190    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:48.541655    8395 logs.go:276] 0 containers: []
	W0617 04:47:48.541666    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:48.541722    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:48.556447    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:48.556465    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:48.556471    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:48.568130    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:48.568140    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:48.583189    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:48.583200    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:48.600931    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:48.600943    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:48.624583    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:48.624594    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:48.638394    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:48.638403    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:48.653677    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:48.653688    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:48.665587    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:48.665600    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:48.682090    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:48.682103    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:48.687263    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:48.687270    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:48.700669    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:48.700680    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:48.712328    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:48.712339    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:48.750663    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:48.750682    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:48.765928    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:48.765940    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:48.777636    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:48.777646    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:51.314844    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:50.017281    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:50.017308    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:56.315978    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:56.316149    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:56.328840    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:47:56.328912    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:56.339230    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:47:56.339304    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:56.350004    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:47:56.350073    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:56.360773    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:47:56.360844    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:56.370843    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:47:56.370910    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:56.381885    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:47:56.381963    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:56.392827    8395 logs.go:276] 0 containers: []
	W0617 04:47:56.392838    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:56.392897    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:56.403104    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:47:56.403121    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:56.403127    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:56.407635    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:47:56.407641    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:47:56.419357    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:47:56.419368    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:47:56.431203    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:47:56.431214    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:47:56.442747    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:47:56.442757    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:56.455695    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:47:56.455706    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:47:56.467902    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:47:56.467918    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:47:56.480253    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:47:56.480267    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:47:56.497933    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:56.497945    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:56.522720    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:47:56.522735    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:47:56.542630    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:47:56.542644    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:47:56.557643    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:56.557654    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:56.596260    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:56.596272    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:56.633299    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:47:56.633313    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:47:56.648642    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:47:56.648656    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:47:55.017464    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:55.017491    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:59.162036    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:00.017934    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:00.017957    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:04.164279    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:04.164396    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:04.176168    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:04.176244    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:04.187400    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:04.187472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:04.197892    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:04.197963    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:04.208468    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:04.208536    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:04.219225    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:04.219293    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:04.229829    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:04.229895    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:04.240234    8395 logs.go:276] 0 containers: []
	W0617 04:48:04.240249    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:04.240310    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:04.254181    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:04.254201    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:04.254206    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:04.265983    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:04.265996    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:04.283409    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:04.283419    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:04.294978    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:04.294987    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:04.299493    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:04.299502    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:04.313916    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:04.313928    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:04.325685    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:04.325694    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:04.340252    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:04.340265    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:04.351639    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:04.351649    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:04.366317    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:04.366330    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:04.405235    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:04.405250    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:04.417477    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:04.417487    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:04.455722    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:04.455734    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:04.467796    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:04.467807    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:04.490917    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:04.490929    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:07.004935    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:05.018313    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:05.018356    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:12.007209    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:12.007461    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:12.027966    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:12.028063    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:12.042348    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:12.042418    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:12.054031    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:12.054105    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:12.064903    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:12.064964    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:12.076113    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:12.076185    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:12.087450    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:12.087521    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:12.098021    8395 logs.go:276] 0 containers: []
	W0617 04:48:12.098034    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:12.098088    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:12.109052    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:12.109067    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:12.109072    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:12.120525    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:12.120541    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:12.145703    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:12.145716    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:12.179629    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:12.179643    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:12.196737    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:12.196747    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:12.211485    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:12.211496    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:12.223587    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:12.223597    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:12.235351    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:12.235362    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:12.253116    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:12.253136    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:12.264972    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:12.264985    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:12.269332    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:12.269342    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:12.280803    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:12.280820    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:12.317907    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:12.317916    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:12.332396    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:12.332408    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:12.344414    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:12.344426    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:10.018882    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:10.018932    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:15.019690    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:15.019736    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0617 04:48:15.412329    8538 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0617 04:48:15.415460    8538 out.go:177] * Enabled addons: storage-provisioner
	I0617 04:48:14.858390    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:15.422463    8538 addons.go:510] duration metric: took 30.498001834s for enable addons: enabled=[storage-provisioner]
	I0617 04:48:19.860643    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:19.860781    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:19.874452    8395 logs.go:276] 1 containers: [b453f811aa37]
	I0617 04:48:19.874532    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:19.885087    8395 logs.go:276] 1 containers: [116b8558a8ab]
	I0617 04:48:19.885165    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:19.896397    8395 logs.go:276] 4 containers: [7afce939153e 9841b8b73fc7 5184e943075e c26f91c53a8c]
	I0617 04:48:19.896472    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:19.906816    8395 logs.go:276] 1 containers: [5129b5f1d898]
	I0617 04:48:19.906884    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:19.916973    8395 logs.go:276] 1 containers: [eecf9fc23d8c]
	I0617 04:48:19.917037    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:19.927589    8395 logs.go:276] 1 containers: [8ccd8d10ee88]
	I0617 04:48:19.927646    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:19.938374    8395 logs.go:276] 0 containers: []
	W0617 04:48:19.938392    8395 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:19.938457    8395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:19.949321    8395 logs.go:276] 1 containers: [4e7e41cba40d]
	I0617 04:48:19.949338    8395 logs.go:123] Gathering logs for kube-apiserver [b453f811aa37] ...
	I0617 04:48:19.949344    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b453f811aa37"
	I0617 04:48:19.963663    8395 logs.go:123] Gathering logs for coredns [5184e943075e] ...
	I0617 04:48:19.963676    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5184e943075e"
	I0617 04:48:19.975318    8395 logs.go:123] Gathering logs for kube-scheduler [5129b5f1d898] ...
	I0617 04:48:19.975329    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5129b5f1d898"
	I0617 04:48:19.989957    8395 logs.go:123] Gathering logs for kube-proxy [eecf9fc23d8c] ...
	I0617 04:48:19.989970    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eecf9fc23d8c"
	I0617 04:48:20.001427    8395 logs.go:123] Gathering logs for kube-controller-manager [8ccd8d10ee88] ...
	I0617 04:48:20.001439    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ccd8d10ee88"
	I0617 04:48:20.018570    8395 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:20.018582    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:48:20.056931    8395 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:20.056942    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:20.061620    8395 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:20.061626    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:20.097981    8395 logs.go:123] Gathering logs for coredns [7afce939153e] ...
	I0617 04:48:20.097994    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7afce939153e"
	I0617 04:48:20.110293    8395 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:20.110307    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:20.134472    8395 logs.go:123] Gathering logs for etcd [116b8558a8ab] ...
	I0617 04:48:20.134485    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 116b8558a8ab"
	I0617 04:48:20.151587    8395 logs.go:123] Gathering logs for coredns [c26f91c53a8c] ...
	I0617 04:48:20.151601    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c26f91c53a8c"
	I0617 04:48:20.162897    8395 logs.go:123] Gathering logs for storage-provisioner [4e7e41cba40d] ...
	I0617 04:48:20.162908    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e7e41cba40d"
	I0617 04:48:20.174291    8395 logs.go:123] Gathering logs for coredns [9841b8b73fc7] ...
	I0617 04:48:20.174306    8395 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9841b8b73fc7"
	I0617 04:48:20.190366    8395 logs.go:123] Gathering logs for container status ...
	I0617 04:48:20.190379    8395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:22.703218    8395 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:20.020574    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:20.020591    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:27.705571    8395 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:27.710021    8395 out.go:177] 
	W0617 04:48:27.714009    8395 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0617 04:48:27.714019    8395 out.go:239] * 
	W0617 04:48:27.714738    8395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:48:27.725830    8395 out.go:177] 
	I0617 04:48:25.021671    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:25.021729    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:30.023186    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:30.023246    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:35.025323    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:35.025346    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
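	
	The alternating "Checking apiserver healthz" / "stopped:" lines above are minikube's API-server readiness loop (logged from api_server.go) timing out against https://10.0.2.15:8443/healthz until the 6m0s node wait expires. A minimal sketch of that polling pattern, assuming only the URL, the roughly 5s probe spacing, and the 6m deadline visible in this log (not minikube's actual implementation):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// pollHealthz probes the apiserver's /healthz endpoint until it returns
	// 200 OK or the overall deadline passes. TLS verification is skipped only
	// because this sketch has no access to the cluster CA bundle.
	func pollHealthz(url string, interval, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for start := time.Now(); time.Since(start) < deadline; time.Sleep(interval) {
			resp, err := client.Get(url)
			if err != nil {
				// Corresponds to the log's "stopped: ... context deadline exceeded" retries.
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return fmt.Errorf("apiserver healthz never reported healthy")
	}
	
	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute); err != nil {
			fmt.Println(err) // the GUEST_START exit above is this case
		}
	}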
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-06-17 11:39:21 UTC, ends at Mon 2024-06-17 11:48:43 UTC. --
	Jun 17 11:48:27 running-upgrade-857000 dockerd[2919]: time="2024-06-17T11:48:27.941157101Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/11d782091ec5e4c1886696817fe34adccb12112eefcc57b3a6487ddb5b59eff4 pid=18562 runtime=io.containerd.runc.v2
	Jun 17 11:48:28 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:28Z" level=error msg="ContainerStats resp: {0x40003a3f80 linux}"
	Jun 17 11:48:28 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:28Z" level=error msg="ContainerStats resp: {0x40007ea140 linux}"
	Jun 17 11:48:28 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 17 11:48:29 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:29Z" level=error msg="ContainerStats resp: {0x40004ff580 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x400009de00 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x400067a300 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x4000884c00 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x4000885100 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x40008852c0 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x4000885cc0 linux}"
	Jun 17 11:48:30 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:30Z" level=error msg="ContainerStats resp: {0x400019c100 linux}"
	Jun 17 11:48:33 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 17 11:48:38 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 17 11:48:40 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:40Z" level=error msg="ContainerStats resp: {0x40007eb700 linux}"
	Jun 17 11:48:40 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:40Z" level=error msg="ContainerStats resp: {0x400090b880 linux}"
	Jun 17 11:48:41 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:41Z" level=error msg="ContainerStats resp: {0x4000a1d600 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x40007a0740 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x40007a0900 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x40007a0f00 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x4000884f00 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x40007a1b00 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x4000885480 linux}"
	Jun 17 11:48:42 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:42Z" level=error msg="ContainerStats resp: {0x400067a8c0 linux}"
	Jun 17 11:48:43 running-upgrade-857000 cri-dockerd[2762]: time="2024-06-17T11:48:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	11d782091ec5e       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   fb9dc8c2505f4
	f06c5c50311e5       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5162648a023f0
	7afce939153ee       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   fb9dc8c2505f4
	9841b8b73fc72       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5162648a023f0
	eecf9fc23d8c7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ec3a4e17e592e
	4e7e41cba40d9       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   80857be48ba88
	8ccd8d10ee88d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f1227fb7c15c8
	b453f811aa37f       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   c50ca6d3c2a90
	116b8558a8aba       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   d501d7952f3e4
	5129b5f1d8982       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   56931fcbccb7b
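	
	The components in this table were located via the repeated `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls earlier in the log; cri-dockerd names pod containers with a `k8s_` prefix, so filtering on that prefix finds each control-plane container. A hypothetical standalone version of that lookup (illustrative only, not minikube's code):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs returns the IDs of all containers (running or exited) whose
	// name carries the k8s_<component> prefix that cri-dockerd assigns.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// Mirrors log lines such as "1 containers: [b453f811aa37]".
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}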
	
	
	==> coredns [11d782091ec5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8445865995847054007.32671098488261764. HINFO: read udp 10.244.0.2:50859->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8445865995847054007.32671098488261764. HINFO: read udp 10.244.0.2:44722->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8445865995847054007.32671098488261764. HINFO: read udp 10.244.0.2:49642->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7afce939153e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:48164->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:36191->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:50408->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:54221->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:41979->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:51252->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:43749->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:33390->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:49871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3558977195295946539.1083728980446034985. HINFO: read udp 10.244.0.2:59826->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9841b8b73fc7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:42920->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:43321->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:45786->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:59993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:58151->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:41201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:53189->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:54130->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:44853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3778390060210870808.7755890320238893055. HINFO: read udp 10.244.0.3:58264->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f06c5c50311e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1179757625918253564.4575432587554953695. HINFO: read udp 10.244.0.3:50548->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1179757625918253564.4575432587554953695. HINFO: read udp 10.244.0.3:42185->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1179757625918253564.4575432587554953695. HINFO: read udp 10.244.0.3:41081->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1179757625918253564.4575432587554953695. HINFO: read udp 10.244.0.3:49695->10.0.2.3:53: i/o timeout
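	
	All four CoreDNS instances fail the same way: their startup HINFO probes are forwarded upstream to 10.0.2.3:53, the DNS proxy built into QEMU user-mode networking, and the UDP reads time out. A small diagnostic sketch that reproduces the probe; the server address is taken from the errors above, everything else is assumed:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	// probeUpstream sends one lookup through a specific resolver, the way
	// CoreDNS forwards queries upstream, and reports whether it answers.
	func probeUpstream(server string) error {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, server)
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		_, err := r.LookupHost(ctx, "kubernetes.io")
		return err
	}
	
	func main() {
		if err := probeUpstream("10.0.2.3:53"); err != nil {
			// On this guest the result is an i/o timeout, like the log lines above.
			fmt.Println("upstream DNS unreachable:", err)
		} else {
			fmt.Println("upstream DNS ok")
		}
	}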
	
	
	==> describe nodes <==
	Name:               running-upgrade-857000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-857000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84fc08e1aa3123a23ee19b25404b578b39fd2f91
	                    minikube.k8s.io/name=running-upgrade-857000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T04_44_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:44:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-857000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:48:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:44:26 +0000   Mon, 17 Jun 2024 11:44:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:44:26 +0000   Mon, 17 Jun 2024 11:44:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:44:26 +0000   Mon, 17 Jun 2024 11:44:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:44:26 +0000   Mon, 17 Jun 2024 11:44:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-857000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 60a414884d0b4d73b17a45c715bccde7
	  System UUID:                60a414884d0b4d73b17a45c715bccde7
	  Boot ID:                    2d524635-979a-4f17-961d-2436ab9946a1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6lxr8                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-grjk4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-857000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-857000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-857000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-4g5hm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-857000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
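	
	The percentages above follow directly from the node's Allocatable block (cpu 2 = 2000m, memory 2148820Ki): for example, 850m of CPU requests is 850/2000 = 42%. A quick arithmetic check, with every value copied from this describe-nodes output:
	
	package main
	
	import "fmt"
	
	func main() {
		const (
			cpuAllocatable = 2000    // 2 cores, in millicores
			memAllocatable = 2148820 // Ki
		)
		cpuRequests := 850        // m: 100+100+100+250+200+0+100+0 from the pod table
		memRequests := 240 * 1024 // Ki (240Mi = 70+70+100 requested)
		memLimits := 340 * 1024   // Ki (340Mi = 170+170 limited)
	
		fmt.Printf("cpu    %dm (%d%%)\n", cpuRequests, cpuRequests*100/cpuAllocatable) // 850m (42%)
		fmt.Printf("memory %dMi (%d%%) / %dMi (%d%%) limits\n",
			memRequests/1024, memRequests*100/memAllocatable,
			memLimits/1024, memLimits*100/memAllocatable) // 240Mi (11%) / 340Mi (16%) limits
	}
	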
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-857000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-857000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-857000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-857000 event: Registered Node running-upgrade-857000 in Controller
	
	
	==> dmesg <==
	[  +1.879303] systemd-fstab-generator[880]: Ignoring "noauto" for root device
	[  +0.081560] systemd-fstab-generator[891]: Ignoring "noauto" for root device
	[  +0.080024] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +1.148666] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.077300] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +0.086502] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +2.606040] systemd-fstab-generator[1294]: Ignoring "noauto" for root device
	[Jun17 11:40] systemd-fstab-generator[1997]: Ignoring "noauto" for root device
	[  +2.948082] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
	[  +0.143906] systemd-fstab-generator[2319]: Ignoring "noauto" for root device
	[  +0.091064] systemd-fstab-generator[2330]: Ignoring "noauto" for root device
	[  +0.104304] systemd-fstab-generator[2343]: Ignoring "noauto" for root device
	[  +2.267549] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.109266] systemd-fstab-generator[2719]: Ignoring "noauto" for root device
	[  +0.080329] systemd-fstab-generator[2730]: Ignoring "noauto" for root device
	[  +0.084923] systemd-fstab-generator[2741]: Ignoring "noauto" for root device
	[  +0.081787] systemd-fstab-generator[2755]: Ignoring "noauto" for root device
	[  +2.154430] systemd-fstab-generator[2906]: Ignoring "noauto" for root device
	[  +4.168769] systemd-fstab-generator[3280]: Ignoring "noauto" for root device
	[  +1.198100] systemd-fstab-generator[3574]: Ignoring "noauto" for root device
	[ +18.110045] kauditd_printk_skb: 68 callbacks suppressed
	[Jun17 11:44] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.434942] systemd-fstab-generator[11641]: Ignoring "noauto" for root device
	[  +6.145640] systemd-fstab-generator[12246]: Ignoring "noauto" for root device
	[  +0.455966] systemd-fstab-generator[12382]: Ignoring "noauto" for root device
	
	
	==> etcd [116b8558a8ab] <==
	{"level":"info","ts":"2024-06-17T11:44:21.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-06-17T11:44:21.705Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-06-17T11:44:21.707Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:44:21.707Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:44:21.707Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:44:21.707Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-17T11:44:21.710Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-857000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:44:22.671Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:44:22.672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:44:22.672Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:44:22.672Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:44:22.672Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T11:44:22.674Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:44:22.674Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-06-17T11:44:22.678Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:44:22.678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:48:44 up 9 min,  0 users,  load average: 0.09, 0.19, 0.11
	Linux running-upgrade-857000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b453f811aa37] <==
	I0617 11:44:23.871903       1 cache.go:39] Caches are synced for autoregister controller
	I0617 11:44:23.875232       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0617 11:44:23.883781       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0617 11:44:23.883797       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0617 11:44:23.883844       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0617 11:44:23.883941       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0617 11:44:23.883958       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0617 11:44:24.621751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0617 11:44:24.786731       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0617 11:44:24.789273       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0617 11:44:24.789328       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0617 11:44:24.915696       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 11:44:24.927588       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0617 11:44:24.958806       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0617 11:44:24.961551       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0617 11:44:24.961912       1 controller.go:611] quota admission added evaluator for: endpoints
	I0617 11:44:24.969585       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 11:44:25.942611       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0617 11:44:26.643999       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0617 11:44:26.647263       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0617 11:44:26.652262       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0617 11:44:26.703150       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 11:44:39.547296       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0617 11:44:39.698829       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0617 11:44:40.279405       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [8ccd8d10ee88] <==
	I0617 11:44:38.804708       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0617 11:44:38.804740       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0617 11:44:38.804855       1 range_allocator.go:374] Set node running-upgrade-857000 PodCIDR to [10.244.0.0/24]
	I0617 11:44:38.828565       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0617 11:44:38.835056       1 shared_informer.go:262] Caches are synced for attach detach
	I0617 11:44:38.843787       1 shared_informer.go:262] Caches are synced for persistent volume
	I0617 11:44:38.845619       1 shared_informer.go:262] Caches are synced for TTL
	I0617 11:44:38.865489       1 shared_informer.go:262] Caches are synced for taint
	I0617 11:44:38.865530       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0617 11:44:38.865551       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-857000. Assuming now as a timestamp.
	I0617 11:44:38.865570       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0617 11:44:38.865604       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0617 11:44:38.865686       1 event.go:294] "Event occurred" object="running-upgrade-857000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-857000 event: Registered Node running-upgrade-857000 in Controller"
	I0617 11:44:38.892772       1 shared_informer.go:262] Caches are synced for GC
	I0617 11:44:38.892789       1 shared_informer.go:262] Caches are synced for daemon sets
	I0617 11:44:38.932680       1 shared_informer.go:262] Caches are synced for cronjob
	I0617 11:44:38.958791       1 shared_informer.go:262] Caches are synced for resource quota
	I0617 11:44:39.004137       1 shared_informer.go:262] Caches are synced for resource quota
	I0617 11:44:39.415990       1 shared_informer.go:262] Caches are synced for garbage collector
	I0617 11:44:39.447255       1 shared_informer.go:262] Caches are synced for garbage collector
	I0617 11:44:39.447345       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0617 11:44:39.548382       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0617 11:44:39.703314       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4g5hm"
	I0617 11:44:39.797793       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6lxr8"
	I0617 11:44:39.800295       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-grjk4"
	
	
	==> kube-proxy [eecf9fc23d8c] <==
	I0617 11:44:40.245710       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0617 11:44:40.245811       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0617 11:44:40.245847       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0617 11:44:40.273926       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0617 11:44:40.273935       1 server_others.go:206] "Using iptables Proxier"
	I0617 11:44:40.273947       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0617 11:44:40.274580       1 server.go:661] "Version info" version="v1.24.1"
	I0617 11:44:40.274585       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:44:40.275500       1 config.go:317] "Starting service config controller"
	I0617 11:44:40.275507       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0617 11:44:40.275515       1 config.go:226] "Starting endpoint slice config controller"
	I0617 11:44:40.275516       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0617 11:44:40.278423       1 config.go:444] "Starting node config controller"
	I0617 11:44:40.278435       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0617 11:44:40.376553       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0617 11:44:40.376577       1 shared_informer.go:262] Caches are synced for service config
	I0617 11:44:40.378651       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5129b5f1d898] <==
	W0617 11:44:23.849267       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:44:23.849306       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 11:44:23.849338       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0617 11:44:23.849416       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0617 11:44:23.849450       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 11:44:23.849472       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 11:44:23.849509       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:44:23.849528       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 11:44:23.849545       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 11:44:23.849555       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0617 11:44:23.849592       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:44:23.849615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:44:23.849658       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:44:23.849681       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:44:23.849785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:44:23.849804       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:44:24.728107       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:44:24.728323       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:44:24.740321       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:44:24.740347       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0617 11:44:24.787305       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 11:44:24.787388       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0617 11:44:24.800656       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 11:44:24.800675       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0617 11:44:27.247308       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-06-17 11:39:21 UTC, ends at Mon 2024-06-17 11:48:44 UTC. --
	Jun 17 11:44:28 running-upgrade-857000 kubelet[12252]: E0617 11:44:28.475904   12252 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-857000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-857000"
	Jun 17 11:44:28 running-upgrade-857000 kubelet[12252]: E0617 11:44:28.676227   12252 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-857000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-857000"
	Jun 17 11:44:28 running-upgrade-857000 kubelet[12252]: I0617 11:44:28.873898   12252 request.go:601] Waited for 1.144372479s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 17 11:44:28 running-upgrade-857000 kubelet[12252]: E0617 11:44:28.877179   12252 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-857000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-857000"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: I0617 11:44:38.870328   12252 topology_manager.go:200] "Topology Admit Handler"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: I0617 11:44:38.873022   12252 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: I0617 11:44:38.873162   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ff638483-8251-4da7-8131-4628de82af24-tmp\") pod \"storage-provisioner\" (UID: \"ff638483-8251-4da7-8131-4628de82af24\") " pod="kube-system/storage-provisioner"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: I0617 11:44:38.873176   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bf4\" (UniqueName: \"kubernetes.io/projected/ff638483-8251-4da7-8131-4628de82af24-kube-api-access-l2bf4\") pod \"storage-provisioner\" (UID: \"ff638483-8251-4da7-8131-4628de82af24\") " pod="kube-system/storage-provisioner"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: I0617 11:44:38.873520   12252 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: E0617 11:44:38.976580   12252 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: E0617 11:44:38.976597   12252 projected.go:192] Error preparing data for projected volume kube-api-access-l2bf4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jun 17 11:44:38 running-upgrade-857000 kubelet[12252]: E0617 11:44:38.976732   12252 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ff638483-8251-4da7-8131-4628de82af24-kube-api-access-l2bf4 podName:ff638483-8251-4da7-8131-4628de82af24 nodeName:}" failed. No retries permitted until 2024-06-17 11:44:39.476617402 +0000 UTC m=+12.842641866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2bf4" (UniqueName: "kubernetes.io/projected/ff638483-8251-4da7-8131-4628de82af24-kube-api-access-l2bf4") pod "storage-provisioner" (UID: "ff638483-8251-4da7-8131-4628de82af24") : configmap "kube-root-ca.crt" not found
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.706494   12252 topology_manager.go:200] "Topology Admit Handler"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.780344   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80351e93-0a3f-4b69-a219-2b0ae7f31103-xtables-lock\") pod \"kube-proxy-4g5hm\" (UID: \"80351e93-0a3f-4b69-a219-2b0ae7f31103\") " pod="kube-system/kube-proxy-4g5hm"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.780407   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80351e93-0a3f-4b69-a219-2b0ae7f31103-lib-modules\") pod \"kube-proxy-4g5hm\" (UID: \"80351e93-0a3f-4b69-a219-2b0ae7f31103\") " pod="kube-system/kube-proxy-4g5hm"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.780418   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j7zr\" (UniqueName: \"kubernetes.io/projected/80351e93-0a3f-4b69-a219-2b0ae7f31103-kube-api-access-9j7zr\") pod \"kube-proxy-4g5hm\" (UID: \"80351e93-0a3f-4b69-a219-2b0ae7f31103\") " pod="kube-system/kube-proxy-4g5hm"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.780428   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80351e93-0a3f-4b69-a219-2b0ae7f31103-kube-proxy\") pod \"kube-proxy-4g5hm\" (UID: \"80351e93-0a3f-4b69-a219-2b0ae7f31103\") " pod="kube-system/kube-proxy-4g5hm"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.802916   12252 topology_manager.go:200] "Topology Admit Handler"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.804994   12252 topology_manager.go:200] "Topology Admit Handler"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.880542   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912fc93b-81e7-4f51-bdcd-b637247b1a8b-config-volume\") pod \"coredns-6d4b75cb6d-6lxr8\" (UID: \"912fc93b-81e7-4f51-bdcd-b637247b1a8b\") " pod="kube-system/coredns-6d4b75cb6d-6lxr8"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.880564   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bc30646-7772-48bb-a4b9-f680c615a76d-config-volume\") pod \"coredns-6d4b75cb6d-grjk4\" (UID: \"6bc30646-7772-48bb-a4b9-f680c615a76d\") " pod="kube-system/coredns-6d4b75cb6d-grjk4"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.880593   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ssr\" (UniqueName: \"kubernetes.io/projected/912fc93b-81e7-4f51-bdcd-b637247b1a8b-kube-api-access-97ssr\") pod \"coredns-6d4b75cb6d-6lxr8\" (UID: \"912fc93b-81e7-4f51-bdcd-b637247b1a8b\") " pod="kube-system/coredns-6d4b75cb6d-6lxr8"
	Jun 17 11:44:39 running-upgrade-857000 kubelet[12252]: I0617 11:44:39.880604   12252 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqggn\" (UniqueName: \"kubernetes.io/projected/6bc30646-7772-48bb-a4b9-f680c615a76d-kube-api-access-xqggn\") pod \"coredns-6d4b75cb6d-grjk4\" (UID: \"6bc30646-7772-48bb-a4b9-f680c615a76d\") " pod="kube-system/coredns-6d4b75cb6d-grjk4"
	Jun 17 11:48:27 running-upgrade-857000 kubelet[12252]: I0617 11:48:27.972699   12252 scope.go:110] "RemoveContainer" containerID="c26f91c53a8c6de065dae8dc18bc83aa528ef6973c30d14d337a22ba4e68784b"
	Jun 17 11:48:27 running-upgrade-857000 kubelet[12252]: I0617 11:48:27.997510   12252 scope.go:110] "RemoveContainer" containerID="5184e943075e38e644c560000e2386c78c79425f4fc3720c96e68b60dff498fe"
	
	
	==> storage-provisioner [4e7e41cba40d] <==
	I0617 11:44:39.686219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 11:44:39.690302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 11:44:39.690326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 11:44:39.693305       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 11:44:39.693505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1c59bc2d-337f-415b-a3b3-3347c2f911f9", APIVersion:"v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-857000_169af03f-b386-4692-87dd-cd74780f98f3 became leader
	I0617 11:44:39.693575       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-857000_169af03f-b386-4692-87dd-cd74780f98f3!
	I0617 11:44:39.793866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-857000_169af03f-b386-4692-87dd-cd74780f98f3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-857000 -n running-upgrade-857000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-857000 -n running-upgrade-857000: exit status 2 (15.687979208s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-857000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-857000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-857000
--- FAIL: TestRunningBinaryUpgrade (609.45s)
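
Note: the control-plane logs above show the kube-scheduler's RBAC list/watch failures clearing once its caches synced at 11:44:27, and storage-provisioner going on to acquire its lease, so the upgraded cluster did come up; the failure is that the apiserver was reported "Stopped" by the time the post-mortem status probe ran roughly four minutes later (and the probe itself took 15.7s to answer). Assuming the profile still exists at that point, full VM logs can be captured before cleanup with the command the report itself suggests:

	out/minikube-darwin-arm64 -p running-upgrade-857000 logs --file=logs.txt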

TestKubernetesUpgrade (18.69s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.9722895s)

-- stdout --
	* [kubernetes-upgrade-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-972000" primary control-plane node in "kubernetes-upgrade-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:41:50.860784    8463 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:41:50.860904    8463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:41:50.860907    8463 out.go:304] Setting ErrFile to fd 2...
	I0617 04:41:50.860910    8463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:41:50.861035    8463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:41:50.862059    8463 out.go:298] Setting JSON to false
	I0617 04:41:50.878562    8463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4280,"bootTime":1718620230,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:41:50.878633    8463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:41:50.885045    8463 out.go:177] * [kubernetes-upgrade-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:41:50.892949    8463 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:41:50.894360    8463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:41:50.893023    8463 notify.go:220] Checking for updates...
	I0617 04:41:50.899962    8463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:41:50.902978    8463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:41:50.906018    8463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:41:50.909048    8463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:41:50.912377    8463 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:41:50.912450    8463 config.go:182] Loaded profile config "running-upgrade-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:41:50.912499    8463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:41:50.915912    8463 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:41:50.922989    8463 start.go:297] selected driver: qemu2
	I0617 04:41:50.922995    8463 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:41:50.923003    8463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:41:50.925109    8463 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:41:50.928940    8463 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:41:50.932802    8463 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:41:50.932834    8463 cni.go:84] Creating CNI manager for ""
	I0617 04:41:50.932840    8463 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0617 04:41:50.932879    8463 start.go:340] cluster config:
	{Name:kubernetes-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:41:50.937193    8463 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:41:50.942952    8463 out.go:177] * Starting "kubernetes-upgrade-972000" primary control-plane node in "kubernetes-upgrade-972000" cluster
	I0617 04:41:50.946964    8463 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:41:50.946981    8463 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:41:50.946989    8463 cache.go:56] Caching tarball of preloaded images
	I0617 04:41:50.947046    8463 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:41:50.947053    8463 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0617 04:41:50.947118    8463 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kubernetes-upgrade-972000/config.json ...
	I0617 04:41:50.947129    8463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kubernetes-upgrade-972000/config.json: {Name:mk60b8a7cb3da0b33636eb97ee72c755c1f64d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:41:50.947371    8463 start.go:360] acquireMachinesLock for kubernetes-upgrade-972000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:41:50.947412    8463 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "kubernetes-upgrade-972000"
	I0617 04:41:50.947423    8463 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:41:50.947456    8463 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:41:50.951009    8463 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:41:50.967841    8463 start.go:159] libmachine.API.Create for "kubernetes-upgrade-972000" (driver="qemu2")
	I0617 04:41:50.967871    8463 client.go:168] LocalClient.Create starting
	I0617 04:41:50.967932    8463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:41:50.967962    8463 main.go:141] libmachine: Decoding PEM data...
	I0617 04:41:50.967973    8463 main.go:141] libmachine: Parsing certificate...
	I0617 04:41:50.968016    8463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:41:50.968038    8463 main.go:141] libmachine: Decoding PEM data...
	I0617 04:41:50.968047    8463 main.go:141] libmachine: Parsing certificate...
	I0617 04:41:50.968423    8463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:41:51.140755    8463 main.go:141] libmachine: Creating SSH key...
	I0617 04:41:51.402419    8463 main.go:141] libmachine: Creating Disk image...
	I0617 04:41:51.402428    8463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:41:51.402626    8463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:51.415894    8463 main.go:141] libmachine: STDOUT: 
	I0617 04:41:51.415914    8463 main.go:141] libmachine: STDERR: 
	I0617 04:41:51.415971    8463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2 +20000M
	I0617 04:41:51.427363    8463 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:41:51.427378    8463 main.go:141] libmachine: STDERR: 
	I0617 04:41:51.427397    8463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:51.427402    8463 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:41:51.427435    8463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:37:08:47:5a:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:51.429289    8463 main.go:141] libmachine: STDOUT: 
	I0617 04:41:51.429302    8463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:41:51.429323    8463 client.go:171] duration metric: took 461.449792ms to LocalClient.Create
	I0617 04:41:53.431513    8463 start.go:128] duration metric: took 2.484056208s to createHost
	I0617 04:41:53.431575    8463 start.go:83] releasing machines lock for "kubernetes-upgrade-972000", held for 2.484179292s
	W0617 04:41:53.431652    8463 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:41:53.441556    8463 out.go:177] * Deleting "kubernetes-upgrade-972000" in qemu2 ...
	W0617 04:41:53.470648    8463 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:41:53.470674    8463 start.go:728] Will try again in 5 seconds ...
	I0617 04:41:58.472894    8463 start.go:360] acquireMachinesLock for kubernetes-upgrade-972000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:41:58.473387    8463 start.go:364] duration metric: took 387.459µs to acquireMachinesLock for "kubernetes-upgrade-972000"
	I0617 04:41:58.473519    8463 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:41:58.473694    8463 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:41:58.478851    8463 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:41:58.520448    8463 start.go:159] libmachine.API.Create for "kubernetes-upgrade-972000" (driver="qemu2")
	I0617 04:41:58.520492    8463 client.go:168] LocalClient.Create starting
	I0617 04:41:58.520598    8463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:41:58.520659    8463 main.go:141] libmachine: Decoding PEM data...
	I0617 04:41:58.520673    8463 main.go:141] libmachine: Parsing certificate...
	I0617 04:41:58.520745    8463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:41:58.520785    8463 main.go:141] libmachine: Decoding PEM data...
	I0617 04:41:58.520797    8463 main.go:141] libmachine: Parsing certificate...
	I0617 04:41:58.521372    8463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:41:58.682432    8463 main.go:141] libmachine: Creating SSH key...
	I0617 04:41:58.736575    8463 main.go:141] libmachine: Creating Disk image...
	I0617 04:41:58.736585    8463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:41:58.736785    8463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:58.749706    8463 main.go:141] libmachine: STDOUT: 
	I0617 04:41:58.749736    8463 main.go:141] libmachine: STDERR: 
	I0617 04:41:58.749786    8463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2 +20000M
	I0617 04:41:58.761067    8463 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:41:58.761083    8463 main.go:141] libmachine: STDERR: 
	I0617 04:41:58.761097    8463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:58.761103    8463 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:41:58.761141    8463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b2:9e:0c:b9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:41:58.763042    8463 main.go:141] libmachine: STDOUT: 
	I0617 04:41:58.763061    8463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:41:58.763074    8463 client.go:171] duration metric: took 242.579792ms to LocalClient.Create
	I0617 04:42:00.765219    8463 start.go:128] duration metric: took 2.291507958s to createHost
	I0617 04:42:00.765358    8463 start.go:83] releasing machines lock for "kubernetes-upgrade-972000", held for 2.291977167s
	W0617 04:42:00.765712    8463 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:42:00.774289    8463 out.go:177] 
	W0617 04:42:00.779429    8463 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:42:00.779456    8463 out.go:239] * 
	* 
	W0617 04:42:00.781101    8463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:42:00.795254    8463 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-972000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-972000: (3.284948917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-972000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-972000 status --format={{.Host}}: exit status 7 (33.396167ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.190534417s)

-- stdout --
	* [kubernetes-upgrade-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-972000" primary control-plane node in "kubernetes-upgrade-972000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:42:04.154763    8499 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:42:04.154914    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:42:04.154918    8499 out.go:304] Setting ErrFile to fd 2...
	I0617 04:42:04.154921    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:42:04.155062    8499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:42:04.156381    8499 out.go:298] Setting JSON to false
	I0617 04:42:04.174746    8499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4294,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:42:04.174821    8499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:42:04.179869    8499 out.go:177] * [kubernetes-upgrade-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:42:04.186887    8499 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:42:04.186966    8499 notify.go:220] Checking for updates...
	I0617 04:42:04.193822    8499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:42:04.196890    8499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:42:04.199870    8499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:42:04.201170    8499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:42:04.203834    8499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:42:04.207224    8499 config.go:182] Loaded profile config "kubernetes-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0617 04:42:04.207475    8499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:42:04.211678    8499 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:42:04.218853    8499 start.go:297] selected driver: qemu2
	I0617 04:42:04.218858    8499 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:42:04.218909    8499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:42:04.221211    8499 cni.go:84] Creating CNI manager for ""
	I0617 04:42:04.221228    8499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:42:04.221245    8499 start.go:340] cluster config:
	{Name:kubernetes-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:42:04.225373    8499 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:42:04.232798    8499 out.go:177] * Starting "kubernetes-upgrade-972000" primary control-plane node in "kubernetes-upgrade-972000" cluster
	I0617 04:42:04.236907    8499 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:42:04.236932    8499 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:42:04.236943    8499 cache.go:56] Caching tarball of preloaded images
	I0617 04:42:04.237022    8499 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:42:04.237028    8499 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:42:04.237083    8499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kubernetes-upgrade-972000/config.json ...
	I0617 04:42:04.237442    8499 start.go:360] acquireMachinesLock for kubernetes-upgrade-972000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:42:04.237473    8499 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "kubernetes-upgrade-972000"
	I0617 04:42:04.237481    8499 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:42:04.237487    8499 fix.go:54] fixHost starting: 
	I0617 04:42:04.237597    8499 fix.go:112] recreateIfNeeded on kubernetes-upgrade-972000: state=Stopped err=<nil>
	W0617 04:42:04.237605    8499 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:42:04.241830    8499 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-972000" ...
	I0617 04:42:04.249851    8499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b2:9e:0c:b9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:42:04.251987    8499 main.go:141] libmachine: STDOUT: 
	I0617 04:42:04.252011    8499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:42:04.252036    8499 fix.go:56] duration metric: took 14.548333ms for fixHost
	I0617 04:42:04.252040    8499 start.go:83] releasing machines lock for "kubernetes-upgrade-972000", held for 14.563417ms
	W0617 04:42:04.252050    8499 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:42:04.252084    8499 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:42:04.252089    8499 start.go:728] Will try again in 5 seconds ...
	I0617 04:42:09.254402    8499 start.go:360] acquireMachinesLock for kubernetes-upgrade-972000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:42:09.254947    8499 start.go:364] duration metric: took 420.333µs to acquireMachinesLock for "kubernetes-upgrade-972000"
	I0617 04:42:09.255034    8499 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:42:09.255055    8499 fix.go:54] fixHost starting: 
	I0617 04:42:09.255796    8499 fix.go:112] recreateIfNeeded on kubernetes-upgrade-972000: state=Stopped err=<nil>
	W0617 04:42:09.255826    8499 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:42:09.265437    8499 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-972000" ...
	I0617 04:42:09.269749    8499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b2:9e:0c:b9:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubernetes-upgrade-972000/disk.qcow2
	I0617 04:42:09.279733    8499 main.go:141] libmachine: STDOUT: 
	I0617 04:42:09.279810    8499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:42:09.279902    8499 fix.go:56] duration metric: took 24.84925ms for fixHost
	I0617 04:42:09.279921    8499 start.go:83] releasing machines lock for "kubernetes-upgrade-972000", held for 24.949625ms
	W0617 04:42:09.280158    8499 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:42:09.288460    8499 out.go:177] 
	W0617 04:42:09.291561    8499 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:42:09.291595    8499 out.go:239] * 
	* 
	W0617 04:42:09.293109    8499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:42:09.303491    8499 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-972000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-972000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-972000 version --output=json: exit status 1 (62.160667ms)

** stderr ** 
	error: context "kubernetes-upgrade-972000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-06-17 04:42:09.379499 -0700 PDT m=+955.520604334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-972000 -n kubernetes-upgrade-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-972000 -n kubernetes-upgrade-972000: exit status 7 (31.918958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-972000
--- FAIL: TestKubernetesUpgrade (18.69s)
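
Note: every qemu2 start in this run fails the same way: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never comes up and minikube exits with status 80. A minimal host-side pre-flight check is sketched below in Go; the probeSocketVMnet helper is hypothetical, not minikube code, and simply reproduces the "connection refused" symptom when the daemon is down.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the socket_vmnet unix socket; an error such as
	// "connect: connection refused" means no daemon is listening, which is
	// the condition behind every GUEST_PROVISION failure in this report.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is up")
	}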

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.12s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19087
- KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1752994031/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.12s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.23s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19087
- KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4064034044/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.23s)
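
Note: both TestHyperkitDriverSkipUpgrade subtests fail before any upgrade logic runs: the hyperkit hypervisor exists only for Intel Macs, so on this darwin/arm64 agent minikube rejects the driver outright with DRV_UNSUPPORTED_OS (exit status 56). The guard presumably reduces to a platform check along these lines; this is an illustrative sketch, not minikube's actual code.

	package main

	import (
		"fmt"
		"runtime"
	)

	// supportsHyperkit mirrors the kind of GOOS/GOARCH guard that yields
	// DRV_UNSUPPORTED_OS above: hyperkit runs only on darwin/amd64.
	func supportsHyperkit() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}

	func main() {
		if !supportsHyperkit() {
			fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Println("hyperkit is available on this host")
	}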

TestStoppedBinaryUpgrade/Upgrade (578.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3897630096 start -p stopped-upgrade-767000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3897630096 start -p stopped-upgrade-767000 --memory=2200 --vm-driver=qemu2 : (40.824366875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3897630096 -p stopped-upgrade-767000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3897630096 -p stopped-upgrade-767000 stop: (12.11071475s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-767000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-767000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m45.129127833s)

-- stdout --
	* [stopped-upgrade-767000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-767000" primary control-plane node in "stopped-upgrade-767000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-767000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0617 04:43:04.465153    8538 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:43:04.465315    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:43:04.465319    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:43:04.465322    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:43:04.465506    8538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:43:04.466703    8538 out.go:298] Setting JSON to false
	I0617 04:43:04.486062    8538 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4354,"bootTime":1718620230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:43:04.486174    8538 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:43:04.491801    8538 out.go:177] * [stopped-upgrade-767000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:43:04.498766    8538 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:43:04.501785    8538 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:43:04.498838    8538 notify.go:220] Checking for updates...
	I0617 04:43:04.509733    8538 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:43:04.512802    8538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:43:04.514196    8538 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:43:04.517693    8538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:43:04.521099    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:43:04.524762    8538 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 04:43:04.527795    8538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:43:04.530766    8538 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:43:04.537713    8538 start.go:297] selected driver: qemu2
	I0617 04:43:04.537718    8538 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:04.537772    8538 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:43:04.540403    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:43:04.540419    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:43:04.540450    8538 start.go:340] cluster config:
	{Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:04.540503    8538 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:43:04.548735    8538 out.go:177] * Starting "stopped-upgrade-767000" primary control-plane node in "stopped-upgrade-767000" cluster
	I0617 04:43:04.552739    8538 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:43:04.552752    8538 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0617 04:43:04.552756    8538 cache.go:56] Caching tarball of preloaded images
	I0617 04:43:04.552802    8538 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:43:04.552807    8538 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0617 04:43:04.552861    8538 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/config.json ...
	I0617 04:43:04.553317    8538 start.go:360] acquireMachinesLock for stopped-upgrade-767000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:43:04.553345    8538 start.go:364] duration metric: took 21.667µs to acquireMachinesLock for "stopped-upgrade-767000"
	I0617 04:43:04.553352    8538 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:43:04.553357    8538 fix.go:54] fixHost starting: 
	I0617 04:43:04.553456    8538 fix.go:112] recreateIfNeeded on stopped-upgrade-767000: state=Stopped err=<nil>
	W0617 04:43:04.553466    8538 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:43:04.556812    8538 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-767000" ...
	I0617 04:43:04.564847    8538 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51472-:22,hostfwd=tcp::51473-:2376,hostname=stopped-upgrade-767000 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/disk.qcow2
	I0617 04:43:04.612893    8538 main.go:141] libmachine: STDOUT: 
	I0617 04:43:04.612915    8538 main.go:141] libmachine: STDERR: 
	I0617 04:43:04.612922    8538 main.go:141] libmachine: Waiting for VM to start (ssh -p 51472 docker@127.0.0.1)...
	I0617 04:43:25.288288    8538 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/config.json ...
	I0617 04:43:25.288994    8538 machine.go:94] provisionDockerMachine start ...
	I0617 04:43:25.289192    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.289690    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.289704    8538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 04:43:25.381276    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0617 04:43:25.381307    8538 buildroot.go:166] provisioning hostname "stopped-upgrade-767000"
	I0617 04:43:25.381432    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.381672    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.381689    8538 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-767000 && echo "stopped-upgrade-767000" | sudo tee /etc/hostname
	I0617 04:43:25.459085    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-767000
	
	I0617 04:43:25.459146    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.459271    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.459280    8538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-767000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-767000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-767000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 04:43:25.525208    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 04:43:25.525221    8538 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19087-6045/.minikube CaCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19087-6045/.minikube}
	I0617 04:43:25.525229    8538 buildroot.go:174] setting up certificates
	I0617 04:43:25.525234    8538 provision.go:84] configureAuth start
	I0617 04:43:25.525242    8538 provision.go:143] copyHostCerts
	I0617 04:43:25.525322    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem, removing ...
	I0617 04:43:25.525329    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem
	I0617 04:43:25.525434    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.pem (1078 bytes)
	I0617 04:43:25.525627    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem, removing ...
	I0617 04:43:25.525631    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem
	I0617 04:43:25.525680    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/cert.pem (1123 bytes)
	I0617 04:43:25.525817    8538 exec_runner.go:144] found /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem, removing ...
	I0617 04:43:25.525820    8538 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem
	I0617 04:43:25.525900    8538 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19087-6045/.minikube/key.pem (1679 bytes)
	I0617 04:43:25.526003    8538 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-767000 san=[127.0.0.1 localhost minikube stopped-upgrade-767000]
	I0617 04:43:25.556971    8538 provision.go:177] copyRemoteCerts
	I0617 04:43:25.557004    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 04:43:25.557010    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:25.592646    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 04:43:25.599455    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0617 04:43:25.605918    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 04:43:25.612976    8538 provision.go:87] duration metric: took 87.729ms to configureAuth
	I0617 04:43:25.612992    8538 buildroot.go:189] setting minikube options for container-runtime
	I0617 04:43:25.613093    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:43:25.613136    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.613229    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.613234    8538 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0617 04:43:25.678679    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0617 04:43:25.678689    8538 buildroot.go:70] root file system type: tmpfs
	I0617 04:43:25.678749    8538 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0617 04:43:25.678801    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.678923    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.678958    8538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0617 04:43:25.747016    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0617 04:43:25.747065    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:25.747189    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:25.747200    8538 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0617 04:43:26.085912    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0617 04:43:26.085924    8538 machine.go:97] duration metric: took 796.928125ms to provisionDockerMachine
	I0617 04:43:26.085930    8538 start.go:293] postStartSetup for "stopped-upgrade-767000" (driver="qemu2")
	I0617 04:43:26.085938    8538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 04:43:26.085988    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 04:43:26.085997    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:26.120743    8538 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 04:43:26.122130    8538 info.go:137] Remote host: Buildroot 2021.02.12
	I0617 04:43:26.122137    8538 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/addons for local assets ...
	I0617 04:43:26.122211    8538 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19087-6045/.minikube/files for local assets ...
	I0617 04:43:26.122329    8538 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem -> 65402.pem in /etc/ssl/certs
	I0617 04:43:26.122459    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 04:43:26.125579    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:43:26.132415    8538 start.go:296] duration metric: took 46.479959ms for postStartSetup
	I0617 04:43:26.132429    8538 fix.go:56] duration metric: took 21.579294s for fixHost
	I0617 04:43:26.132462    8538 main.go:141] libmachine: Using SSH client type: native
	I0617 04:43:26.132570    8538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101212980] 0x1012151e0 <nil>  [] 0s} localhost 51472 <nil> <nil>}
	I0617 04:43:26.132574    8538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0617 04:43:26.198405    8538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718624606.504962171
	
	I0617 04:43:26.198413    8538 fix.go:216] guest clock: 1718624606.504962171
	I0617 04:43:26.198418    8538 fix.go:229] Guest: 2024-06-17 04:43:26.504962171 -0700 PDT Remote: 2024-06-17 04:43:26.132432 -0700 PDT m=+21.696242043 (delta=372.530171ms)
	I0617 04:43:26.198429    8538 fix.go:200] guest clock delta is within tolerance: 372.530171ms
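
Note: the guest-clock check above compares the guest's `date +%s.%N` reading against the host clock and only resynchronizes when the drift exceeds a tolerance; here the ~372ms delta passes. A sketch of that comparison, with an assumed one-second tolerance (the real threshold lives in fix.go and is not shown in this log):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance returns the absolute host/guest clock delta and
	// whether it is small enough to skip a resync. Illustrative only.
	func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(372530171 * time.Nanosecond) // the 372.530171ms delta from the log
		d, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v, within tolerance: %v\n", d, ok)
	}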
	I0617 04:43:26.198432    8538 start.go:83] releasing machines lock for "stopped-upgrade-767000", held for 21.645306583s
	I0617 04:43:26.198494    8538 ssh_runner.go:195] Run: cat /version.json
	I0617 04:43:26.198504    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:43:26.198494    8538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 04:43:26.198544    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	W0617 04:43:26.199156    8538 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51472: connect: connection refused
	I0617 04:43:26.199180    8538 retry.go:31] will retry after 303.14945ms: dial tcp [::1]:51472: connect: connection refused
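
Note: this dial failure is transient, not fatal: the first connection to the forwarded SSH port is refused while the just-restarted VM finishes booting, and retry.go backs off and tries again (here after ~303ms). A simplified stand-in for that behaviour, assuming a randomized backoff of up to 500ms:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry retries a TCP dial with a short randomized backoff
	// instead of failing on the first "connection refused".
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			time.Sleep(time.Duration(rand.Int63n(int64(500 * time.Millisecond))))
		}
		return nil, fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:51472", 5)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("connected")
	}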
	W0617 04:43:26.231359    8538 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0617 04:43:26.231407    8538 ssh_runner.go:195] Run: systemctl --version
	I0617 04:43:26.233238    8538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0617 04:43:26.235034    8538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0617 04:43:26.235061    8538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0617 04:43:26.237959    8538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0617 04:43:26.242648    8538 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0617 04:43:26.242656    8538 start.go:494] detecting cgroup driver to use...
	I0617 04:43:26.242724    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:43:26.249454    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0617 04:43:26.252903    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 04:43:26.255737    8538 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 04:43:26.255763    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 04:43:26.258479    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:43:26.261674    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 04:43:26.264862    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 04:43:26.267701    8538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 04:43:26.270404    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 04:43:26.273674    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0617 04:43:26.277035    8538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0617 04:43:26.280094    8538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 04:43:26.282709    8538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 04:43:26.285887    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:26.355232    8538 ssh_runner.go:195] Run: sudo systemctl restart containerd
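
Note: the run of sed commands above rewrites /etc/containerd/config.toml in place before restarting containerd: it pins the pause sandbox image, forces the cgroupfs cgroup driver (SystemdCgroup = false), switches the runtime shims to io.containerd.runc.v2, and points the CNI conf_dir at /etc/cni/net.d. The SystemdCgroup edit expressed in Go rather than sed (illustrative, same effect as the `sed -i -r` line):

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs rewrites any `SystemdCgroup = ...` line to `false`,
	// preserving indentation, exactly as the sed substitution above does.
	func forceCgroupfs(configTOML string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	}

	func main() {
		in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
			"  SystemdCgroup = true\n"
		fmt.Print(forceCgroupfs(in))
	}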
	I0617 04:43:26.361050    8538 start.go:494] detecting cgroup driver to use...
	I0617 04:43:26.361129    8538 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0617 04:43:26.366592    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:43:26.375454    8538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0617 04:43:26.381466    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0617 04:43:26.386085    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 04:43:26.391042    8538 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0617 04:43:26.447253    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 04:43:26.452145    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 04:43:26.457362    8538 ssh_runner.go:195] Run: which cri-dockerd
	I0617 04:43:26.458575    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0617 04:43:26.461276    8538 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0617 04:43:26.466026    8538 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0617 04:43:26.527411    8538 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0617 04:43:26.590926    8538 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0617 04:43:26.590982    8538 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0617 04:43:26.598044    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:26.658406    8538 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:43:27.799286    8538 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.140870625s)
	I0617 04:43:27.799351    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0617 04:43:27.804065    8538 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0617 04:43:27.809052    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:43:27.813477    8538 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0617 04:43:27.875557    8538 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0617 04:43:27.935659    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:27.994159    8538 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0617 04:43:28.000396    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0617 04:43:28.004739    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:28.066374    8538 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0617 04:43:28.104621    8538 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0617 04:43:28.104703    8538 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0617 04:43:28.107839    8538 start.go:562] Will wait 60s for crictl version
	I0617 04:43:28.107901    8538 ssh_runner.go:195] Run: which crictl
	I0617 04:43:28.109206    8538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 04:43:28.124384    8538 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0617 04:43:28.124451    8538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:43:28.149771    8538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0617 04:43:28.171020    8538 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0617 04:43:28.171140    8538 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0617 04:43:28.172345    8538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 04:43:28.176055    8538 kubeadm.go:877] updating cluster {Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0617 04:43:28.176109    8538 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0617 04:43:28.176150    8538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:43:28.191908    8538 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:43:28.191917    8538 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:43:28.191964    8538 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:43:28.194794    8538 ssh_runner.go:195] Run: which lz4
	I0617 04:43:28.196121    8538 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0617 04:43:28.197332    8538 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0617 04:43:28.197343    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0617 04:43:28.913124    8538 docker.go:649] duration metric: took 717.046875ms to copy over tarball
	I0617 04:43:28.913187    8538 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0617 04:43:30.075857    8538 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162667209s)
	I0617 04:43:30.075872    8538 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0617 04:43:30.091592    8538 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0617 04:43:30.094513    8538 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0617 04:43:30.099514    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:30.158323    8538 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0617 04:43:31.815446    8538 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657124166s)
	I0617 04:43:31.815536    8538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0617 04:43:31.826586    8538 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0617 04:43:31.826596    8538 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0617 04:43:31.826601    8538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
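
Note: the preload looks "broken" only because of the registry rename: the v1.26.0 ISO ships images tagged k8s.gcr.io/*, while this minikube expects registry.k8s.io/* references, so the check concludes that each expected image "wasn't preloaded" and falls back to loading every one from the host cache. A sketch of that check (assuming it is a plain name match, as the `docker images --format {{.Repository}}:{{.Tag}}` probe suggests):

	package main

	import "fmt"

	// preloaded reports whether the expected reference appears verbatim in
	// the daemon's image list; an equivalent image under the old registry
	// name does not count, which is what triggers the cache fallback here.
	func preloaded(expected string, daemonImages []string) bool {
		for _, img := range daemonImages {
			if img == expected {
				return true
			}
		}
		return false
	}

	func main() {
		daemon := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/pause:3.7"}
		fmt.Println(preloaded("registry.k8s.io/kube-apiserver:v1.24.1", daemon)) // false
	}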
	I0617 04:43:31.833249    8538 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:31.833268    8538 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:31.833283    8538 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:31.833329    8538 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:31.833349    8538 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:31.833381    8538 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0617 04:43:31.833473    8538 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:31.833512    8538 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:31.841272    8538 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:31.841337    8538 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0617 04:43:31.841391    8538 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:31.841484    8538 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:31.841666    8538 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:31.841561    8538 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:31.841656    8538 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:31.841873    8538 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.734405    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.750426    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.767766    8538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0617 04:43:32.767810    8538 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.767903    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0617 04:43:32.777615    8538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0617 04:43:32.777645    8538 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.777718    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0617 04:43:32.782218    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0617 04:43:32.785510    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0617 04:43:32.795112    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.798928    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0617 04:43:32.808161    8538 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0617 04:43:32.808184    8538 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0617 04:43:32.808246    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0617 04:43:32.818192    8538 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0617 04:43:32.818212    8538 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.818266    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0617 04:43:32.824026    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0617 04:43:32.824160    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0617 04:43:32.831016    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0617 04:43:32.831030    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0617 04:43:32.831045    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0617 04:43:32.837313    8538 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0617 04:43:32.837426    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.838839    8538 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0617 04:43:32.838847    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0617 04:43:32.852991    8538 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0617 04:43:32.853013    8538 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.853068    8538 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:43:32.875940    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0617 04:43:32.875994    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0617 04:43:32.876091    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:43:32.877433    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0617 04:43:32.877448    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0617 04:43:32.878630    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0617 04:43:32.886574    8538 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0617 04:43:32.886684    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.894687    8538 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.907628    8538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0617 04:43:32.907653    8538 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:32.907701    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0617 04:43:32.910272    8538 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0617 04:43:32.910287    8538 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.910322    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0617 04:43:32.914201    8538 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0617 04:43:32.914213    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0617 04:43:32.928050    8538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0617 04:43:32.928071    8538 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.928125    8538 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0617 04:43:32.934429    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0617 04:43:32.934466    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0617 04:43:32.934576    8538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:43:33.170209    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0617 04:43:33.170253    8538 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0617 04:43:33.170266    8538 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0617 04:43:33.170284    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0617 04:43:33.205607    8538 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0617 04:43:33.205621    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0617 04:43:33.249774    8538 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0617 04:43:33.249810    8538 cache_images.go:92] duration metric: took 1.4231945s to LoadCachedImages
	W0617 04:43:33.249854    8538 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0617 04:43:33.249859    8538 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0617 04:43:33.249911    8538 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-767000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 04:43:33.249975    8538 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0617 04:43:33.263731    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:43:33.263742    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:43:33.263747    8538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 04:43:33.263755    8538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-767000 NodeName:stopped-upgrade-767000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 04:43:33.263819    8538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-767000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 04:43:33.263876    8538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0617 04:43:33.266672    8538 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 04:43:33.266696    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 04:43:33.269699    8538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0617 04:43:33.274758    8538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 04:43:33.279549    8538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0617 04:43:33.284735    8538 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0617 04:43:33.285948    8538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 04:43:33.289842    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:43:33.354694    8538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:43:33.360309    8538 certs.go:68] Setting up /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000 for IP: 10.0.2.15
	I0617 04:43:33.360316    8538 certs.go:194] generating shared ca certs ...
	I0617 04:43:33.360325    8538 certs.go:226] acquiring lock for ca certs: {Name:mk71e2ea16ce0c468e7dfee6f005765117fbc8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.360494    8538 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key
	I0617 04:43:33.360543    8538 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key
	I0617 04:43:33.360549    8538 certs.go:256] generating profile certs ...
	I0617 04:43:33.360620    8538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key
	I0617 04:43:33.360636    8538 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98
	I0617 04:43:33.360647    8538 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0617 04:43:33.486940    8538 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 ...
	I0617 04:43:33.486957    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98: {Name:mk7db01f0a717421f7581ec76fcbdd8064ed6750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.487390    8538 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98 ...
	I0617 04:43:33.487409    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98: {Name:mkffc6d40f94ec0c1441a6a597a6004138fbbc94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.487563    8538 certs.go:381] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt.0f160e98 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt
	I0617 04:43:33.487690    8538 certs.go:385] copying /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key.0f160e98 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key
	I0617 04:43:33.487847    8538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.key
	I0617 04:43:33.487996    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem (1338 bytes)
	W0617 04:43:33.488024    8538 certs.go:480] ignoring /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540_empty.pem, impossibly tiny 0 bytes
	I0617 04:43:33.488029    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 04:43:33.488047    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem (1078 bytes)
	I0617 04:43:33.488078    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem (1123 bytes)
	I0617 04:43:33.488095    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/key.pem (1679 bytes)
	I0617 04:43:33.488132    8538 certs.go:484] found cert: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem (1708 bytes)
	I0617 04:43:33.488899    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 04:43:33.496772    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 04:43:33.503849    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 04:43:33.510351    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 04:43:33.517811    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 04:43:33.524478    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 04:43:33.531264    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 04:43:33.537951    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 04:43:33.545191    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 04:43:33.551587    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/6540.pem --> /usr/share/ca-certificates/6540.pem (1338 bytes)
	I0617 04:43:33.557991    8538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/ssl/certs/65402.pem --> /usr/share/ca-certificates/65402.pem (1708 bytes)
	I0617 04:43:33.564882    8538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 04:43:33.569874    8538 ssh_runner.go:195] Run: openssl version
	I0617 04:43:33.571831    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65402.pem && ln -fs /usr/share/ca-certificates/65402.pem /etc/ssl/certs/65402.pem"
	I0617 04:43:33.574640    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.575960    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 11:27 /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.575979    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65402.pem
	I0617 04:43:33.577604    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65402.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 04:43:33.580821    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 04:43:33.583699    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.584943    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:39 /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.584962    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 04:43:33.586754    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 04:43:33.589926    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6540.pem && ln -fs /usr/share/ca-certificates/6540.pem /etc/ssl/certs/6540.pem"
	I0617 04:43:33.593412    8538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.594794    8538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 11:27 /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.594813    8538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6540.pem
	I0617 04:43:33.596529    8538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6540.pem /etc/ssl/certs/51391683.0"
	I0617 04:43:33.599477    8538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 04:43:33.600836    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 04:43:33.603491    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 04:43:33.605326    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 04:43:33.607229    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 04:43:33.608833    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 04:43:33.610476    8538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 04:43:33.612351    8538 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0617 04:43:33.612416    8538 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:43:33.622800    8538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 04:43:33.625885    8538 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 04:43:33.625892    8538 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 04:43:33.625895    8538 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 04:43:33.625919    8538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 04:43:33.629231    8538 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 04:43:33.629541    8538 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-767000" does not appear in /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:43:33.629636    8538 kubeconfig.go:62] /Users/jenkins/minikube-integration/19087-6045/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-767000" cluster setting kubeconfig missing "stopped-upgrade-767000" context setting]
	I0617 04:43:33.629822    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:43:33.630241    8538 kapi.go:59] client config for stopped-upgrade-767000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a0460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0617 04:43:33.630563    8538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 04:43:33.633395    8538 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-767000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0617 04:43:33.633401    8538 kubeadm.go:1154] stopping kube-system containers ...
	I0617 04:43:33.633436    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0617 04:43:33.644174    8538 docker.go:483] Stopping containers: [28331efdc258 f5446e1c7e66 388707f1fcc0 293474b3258b c6ee7db29f8d 7a79dd7078e6 4817d393fb9b 853a9dce7b50]
	I0617 04:43:33.644231    8538 ssh_runner.go:195] Run: docker stop 28331efdc258 f5446e1c7e66 388707f1fcc0 293474b3258b c6ee7db29f8d 7a79dd7078e6 4817d393fb9b 853a9dce7b50
	I0617 04:43:33.655003    8538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0617 04:43:33.660677    8538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:43:33.663266    8538 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:43:33.663272    8538 kubeadm.go:156] found existing configuration files:
	
	I0617 04:43:33.663295    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf
	I0617 04:43:33.665885    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:43:33.665910    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:43:33.669051    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf
	I0617 04:43:33.671572    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:43:33.671592    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:43:33.674319    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf
	I0617 04:43:33.677312    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:43:33.677353    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:43:33.679961    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf
	I0617 04:43:33.682266    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:43:33.682287    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 04:43:33.684984    8538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:43:33.687511    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:33.708836    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.425622    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.543052    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.563256    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0617 04:43:34.588179    8538 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:43:34.588267    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.090436    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.590326    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:43:35.594415    8538 api_server.go:72] duration metric: took 1.006249708s to wait for apiserver process to appear ...
	I0617 04:43:35.594425    8538 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:43:35.594433    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:40.596205    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:40.596257    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:45.596566    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:45.596620    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:50.597381    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:50.597427    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:43:55.598024    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:43:55.598071    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:00.598957    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:00.599042    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:05.600956    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:05.601010    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:10.602647    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:10.602675    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:15.604509    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:15.604546    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:20.606774    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:20.606803    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:25.608940    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:25.608960    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:30.611086    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:30.611105    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:35.613265    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:35.613478    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:35.634851    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:35.634936    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:35.648441    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:35.648516    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:35.660524    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:35.660595    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:35.671074    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:35.671159    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:35.687894    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:35.687959    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:35.698712    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:35.698777    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:35.708399    8538 logs.go:276] 0 containers: []
	W0617 04:44:35.708411    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:35.708472    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:35.718947    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:35.718965    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:35.718973    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:35.734035    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:35.734050    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:35.746755    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:35.746765    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:35.757569    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:35.757579    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:35.769564    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:35.769582    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:35.774300    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:35.774306    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:35.786099    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:35.786112    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:35.811295    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:35.811305    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:35.827339    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:35.827349    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:35.844602    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:35.844616    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:35.861761    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:35.861771    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:35.963549    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:35.963563    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:35.976350    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:35.976360    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:36.004971    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:36.004989    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:36.021736    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:36.021748    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:36.036073    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:36.036086    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:38.563857    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:43.564290    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:43.564563    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:43.587689    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:43.587812    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:43.610139    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:43.610212    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:43.622696    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:43.622762    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:43.633416    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:43.633500    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:43.644067    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:43.644138    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:43.654896    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:43.654965    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:43.665250    8538 logs.go:276] 0 containers: []
	W0617 04:44:43.665262    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:43.665322    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:43.675861    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:43.675880    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:43.675885    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:43.697632    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:43.697646    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:43.713310    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:43.713324    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:43.730476    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:43.730487    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:43.747788    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:43.747799    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:43.758854    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:43.758864    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:43.785103    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:43.785115    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:43.796512    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:43.796523    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:43.825202    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:43.825211    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:43.839153    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:43.839163    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:43.876898    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:43.876909    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:43.889430    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:43.889439    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:43.902927    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:43.902936    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:43.917229    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:43.917241    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:43.928515    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:43.928525    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:43.941059    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:43.941069    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:46.446568    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:51.448018    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:51.448247    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:51.467932    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:51.468026    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:51.479152    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:51.479223    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:51.489924    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:51.489992    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:51.500851    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:51.500924    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:51.511386    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:51.511442    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:51.521775    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:51.521850    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:51.532094    8538 logs.go:276] 0 containers: []
	W0617 04:44:51.532107    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:51.532167    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:51.542229    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:51.542245    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:51.542250    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:51.570483    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:51.570493    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:51.605458    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:51.605471    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:51.619403    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:51.619413    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:51.631804    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:51.631818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:51.646246    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:51.646259    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:51.661633    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:51.661645    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:51.675720    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:51.675735    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:44:51.701467    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:51.701476    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:51.715562    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:51.715573    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:51.737245    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:51.737259    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:51.754401    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:51.754412    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:51.766035    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:51.766048    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:51.770660    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:51.770665    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:51.785693    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:51.785706    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:51.797254    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:51.797265    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:54.322670    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:44:59.325006    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:44:59.325119    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:44:59.336685    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:44:59.336752    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:44:59.347203    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:44:59.347274    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:44:59.357779    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:44:59.357847    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:44:59.368533    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:44:59.368600    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:44:59.378922    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:44:59.378994    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:44:59.389211    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:44:59.389276    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:44:59.401520    8538 logs.go:276] 0 containers: []
	W0617 04:44:59.401532    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:44:59.401586    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:44:59.411515    8538 logs.go:276] 1 containers: [0938f605d529]
	I0617 04:44:59.411538    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:44:59.411544    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:44:59.425594    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:44:59.425607    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:44:59.439660    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:44:59.439671    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:44:59.461123    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:44:59.461135    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:44:59.490031    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:44:59.490042    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:44:59.505453    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:44:59.505466    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:44:59.519287    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:44:59.519299    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:44:59.530617    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:44:59.530628    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:44:59.541956    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:44:59.541968    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:44:59.553921    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:44:59.553935    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:44:59.558123    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:44:59.558133    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:44:59.592794    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:44:59.592808    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:44:59.608765    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:44:59.608778    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:44:59.620297    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:44:59.620308    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:44:59.637115    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:44:59.637125    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:44:59.655084    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:44:59.655096    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:02.183362    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:07.185606    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:07.185927    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:07.222261    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:07.222401    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:07.243720    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:07.243829    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:07.258048    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:07.258136    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:07.273302    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:07.273371    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:07.284026    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:07.284084    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:07.294801    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:07.294870    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:07.307522    8538 logs.go:276] 0 containers: []
	W0617 04:45:07.307533    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:07.307590    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:07.318303    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:07.318321    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:07.318326    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:07.322820    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:07.322826    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:07.378671    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:07.378688    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:07.404037    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:07.404051    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:07.419823    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:07.419834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:07.431538    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:07.431548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:07.442403    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:07.442415    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:07.455807    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:07.455818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:07.470418    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:07.470430    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:07.482046    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:07.482056    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:07.504270    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:07.504284    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:07.515943    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:07.515959    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:07.541116    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:07.541124    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:07.568691    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:07.568703    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:07.582457    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:07.582470    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:07.601982    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:07.601995    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:07.623686    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:07.623696    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
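
	The five-second gap between each "Checking apiserver healthz" line and the matching "stopped: ... Client.Timeout exceeded while awaiting headers" line above is a client-side probe timeout, not the apiserver answering slowly. A minimal sketch of that probe loop follows; the endpoint and the ~5s timeout are taken from the log, while the retry count, backoff, and TLS handling are illustrative assumptions, not minikube's actual api_server.go:

	// healthpoll.go - sketch of the probe pattern visible in the log:
	// hit /healthz with a ~5s client timeout, retry on failure.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint and timeout as seen in the log; all else is assumption.
		const healthz = "https://10.0.2.15:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s "Checking" -> "stopped" gap
			Transport: &http.Transport{
				// The apiserver's serving cert is typically not trusted by
				// the probing host, so a probe would skip or pin verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(healthz)
			if err != nil {
				// This is the "context deadline exceeded (Client.Timeout
				// exceeded while awaiting headers)" case from the log.
				fmt.Printf("attempt %d: stopped: %v\n", attempt, err)
				time.Sleep(3 * time.Second) // back off before re-probing
				continue
			}
			resp.Body.Close()
			fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
			return
		}
		fmt.Println("apiserver never became healthy; gathering diagnostics instead")
	}
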
	I0617 04:45:10.138108    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:15.140369    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:15.140516    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:15.152898    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:15.152968    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:15.163402    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:15.163471    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:15.174940    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:15.175007    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:15.185676    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:15.185743    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:15.196352    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:15.196418    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:15.207000    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:15.207074    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:15.217614    8538 logs.go:276] 0 containers: []
	W0617 04:45:15.217633    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:15.217688    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:15.232902    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:15.232919    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:15.232924    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:15.260523    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:15.260531    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:15.272834    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:15.272845    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:15.285892    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:15.285903    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:15.300158    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:15.300170    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:15.311691    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:15.311703    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:15.328554    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:15.328564    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:15.339705    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:15.339714    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:15.363237    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:15.363245    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:15.367065    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:15.367074    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:15.402066    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:15.402080    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:15.416831    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:15.416840    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:15.430852    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:15.430861    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:15.442123    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:15.442137    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:15.463265    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:15.463275    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:15.478701    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:15.478715    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:15.496010    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:15.496021    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
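
	Each failed probe is followed by per-component container discovery: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" call per control-plane component, whose output produces the "N containers: [...]" lines above (including the empty kindnet result). A sketch of that step, assuming a docker CLI on PATH and kubeadm-style k8s_* container names; the helper name containersFor is hypothetical:

	// discover.go - sketch of the discovery step that precedes each log sweep.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containersFor returns the IDs of all containers (running or exited)
	// whose name matches the kubeadm-style prefix k8s_<component>.
	func containersFor(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The component list mirrors the one the log cycles through.
		for _, c := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"storage-provisioner",
		} {
			ids, err := containersFor(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		}
	}
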
	I0617 04:45:18.009027    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:23.011302    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:23.011436    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:23.030136    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:23.030232    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:23.044598    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:23.044671    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:23.056429    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:23.056500    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:23.067125    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:23.067193    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:23.077542    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:23.077612    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:23.088214    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:23.088287    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:23.098806    8538 logs.go:276] 0 containers: []
	W0617 04:45:23.098818    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:23.098876    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:23.109292    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:23.109312    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:23.109319    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:23.143671    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:23.143686    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:23.164695    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:23.164709    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:23.182483    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:23.182493    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:23.200020    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:23.200033    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:23.210790    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:23.210801    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:23.226425    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:23.226437    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:23.244000    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:23.244010    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:23.255768    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:23.255781    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:23.272626    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:23.272640    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:23.301616    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:23.301628    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:23.305904    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:23.305911    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:23.346637    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:23.346656    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:23.363076    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:23.363088    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:23.387314    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:23.387323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:23.398981    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:23.398995    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:23.409629    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:23.409640    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
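
	The discovered IDs then feed the log sweep itself: "docker logs --tail 400 <id>" per container, plus the host-level sources (kubelet and docker/cri-docker journals, dmesg, describe nodes) and the crictl-or-docker container-status fallback. A sketch of that fan-out; the shell command lines are copied from the log, while the run wrapper and the two sample container IDs are illustrative only:

	// gather.go - sketch of the per-cycle log sweep.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("== %s %v (err=%v) ==\n%s\n", name, args, err, out)
	}

	func main() {
		// Container logs: "docker logs --tail 400 <id>" per discovered ID.
		for _, id := range []string{"822cea388d1a", "4b8c612a132a"} { // sample subset
			run("docker", "logs", "--tail", "400", id)
		}
		// Host-level sources, as in the "Gathering logs for ..." steps above.
		run("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
		run("/bin/bash", "-c", "sudo journalctl -u docker -u cri-docker -n 400")
		run("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Container status, with the crictl-or-docker fallback seen in the log.
		run("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}
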
	I0617 04:45:25.923354    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:30.925594    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:30.925733    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:30.943506    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:30.943577    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:30.955378    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:30.955446    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:30.966273    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:30.966335    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:30.976364    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:30.976427    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:30.986289    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:30.986380    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:30.996658    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:30.996724    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:31.007418    8538 logs.go:276] 0 containers: []
	W0617 04:45:31.007429    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:31.007487    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:31.018039    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:31.018057    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:31.018063    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:31.030893    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:31.030905    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:31.035168    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:31.035174    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:31.048983    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:31.048994    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:31.063088    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:31.063098    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:31.076923    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:31.076934    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:31.096683    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:31.096694    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:31.114239    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:31.114251    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:31.139095    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:31.139106    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:31.167137    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:31.167147    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:31.178627    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:31.178638    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:31.214740    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:31.214750    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:31.235920    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:31.235930    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:31.247785    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:31.247798    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:31.268227    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:31.268237    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:31.279937    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:31.279949    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:31.291296    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:31.291309    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:33.806371    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:38.808708    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:38.808871    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:38.822324    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:38.822407    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:38.834656    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:38.834725    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:38.844895    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:38.844955    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:38.855289    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:38.855365    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:38.866024    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:38.866085    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:38.876347    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:38.876404    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:38.886673    8538 logs.go:276] 0 containers: []
	W0617 04:45:38.886685    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:38.886746    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:38.902401    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:38.902418    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:38.902424    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:38.913915    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:38.913927    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:38.928145    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:38.928158    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:38.945271    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:38.945282    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:38.957178    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:38.957188    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:38.975265    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:38.975275    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:39.012153    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:39.012166    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:39.030243    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:39.030260    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:39.059802    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:39.059828    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:39.087622    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:39.087637    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:39.091780    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:39.091787    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:39.110437    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:39.110448    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:39.123226    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:39.123236    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:39.138956    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:39.138970    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:39.153328    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:39.153343    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:39.176509    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:39.176522    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:39.188109    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:39.188121    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:41.701254    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:46.703545    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:46.703657    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:46.721148    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:46.721226    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:46.733671    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:46.733736    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:46.743592    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:46.743686    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:46.754224    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:46.754292    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:46.764181    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:46.764245    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:46.774614    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:46.774682    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:46.785130    8538 logs.go:276] 0 containers: []
	W0617 04:45:46.785145    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:46.785202    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:46.796384    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:46.796408    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:46.796414    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:46.812265    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:46.812279    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:46.837526    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:46.837534    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:46.851749    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:46.851764    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:46.870272    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:46.870286    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:46.882772    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:46.882787    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:46.894698    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:46.894713    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:46.915082    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:46.915096    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:46.930035    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:46.930050    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:46.958524    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:46.958533    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:46.997265    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:46.997281    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:47.014697    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:47.014711    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:47.026078    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:47.026092    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:47.042945    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:47.042958    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:47.047318    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:47.047323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:47.058735    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:47.058751    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:47.072420    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:47.072434    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:49.591308    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:45:54.593649    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:45:54.593815    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:45:54.611483    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:45:54.611575    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:45:54.625122    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:45:54.625188    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:45:54.637713    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:45:54.637779    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:45:54.648052    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:45:54.648127    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:45:54.658503    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:45:54.658566    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:45:54.669430    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:45:54.669501    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:45:54.680021    8538 logs.go:276] 0 containers: []
	W0617 04:45:54.680034    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:45:54.680098    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:45:54.693562    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:45:54.693581    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:45:54.693586    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:45:54.728368    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:45:54.728381    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:45:54.749911    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:45:54.749921    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:45:54.761427    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:45:54.761438    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:45:54.786580    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:45:54.786587    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:45:54.815277    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:45:54.815287    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:45:54.829143    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:45:54.829154    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:45:54.842692    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:45:54.842702    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:45:54.856798    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:45:54.856808    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:45:54.871633    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:45:54.871642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:45:54.888899    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:45:54.888907    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:45:54.900620    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:45:54.900631    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:45:54.918014    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:45:54.918030    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:45:54.931194    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:45:54.931205    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:45:54.935407    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:45:54.935415    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:45:54.948368    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:45:54.948379    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:45:54.959798    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:45:54.959810    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:45:57.474727    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:02.477107    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:02.477312    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:02.498505    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:02.498605    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:02.513541    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:02.513624    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:02.525104    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:02.525176    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:02.535880    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:02.535952    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:02.547557    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:02.547622    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:02.557776    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:02.557846    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:02.567618    8538 logs.go:276] 0 containers: []
	W0617 04:46:02.567630    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:02.567692    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:02.578121    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:02.578138    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:02.578144    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:02.592591    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:02.592604    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:02.604919    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:02.604930    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:02.615942    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:02.615954    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:02.627994    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:02.628003    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:02.632012    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:02.632018    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:02.647730    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:02.647743    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:02.659541    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:02.659553    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:02.683933    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:02.683948    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:02.712179    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:02.712188    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:02.725605    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:02.725617    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:02.743268    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:02.743282    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:02.759994    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:02.760007    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:02.776640    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:02.776655    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:02.817390    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:02.817401    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:02.831684    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:02.831694    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:02.843638    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:02.843651    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:05.371066    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:10.373301    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:10.373498    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:10.389619    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:10.389699    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:10.401778    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:10.401847    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:10.412513    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:10.412579    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:10.422910    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:10.422978    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:10.433347    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:10.433420    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:10.444148    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:10.444213    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:10.454517    8538 logs.go:276] 0 containers: []
	W0617 04:46:10.454528    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:10.454582    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:10.465391    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:10.465407    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:10.465412    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:10.482238    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:10.482249    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:10.493657    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:10.493667    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:10.514108    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:10.514118    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:10.537656    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:10.537669    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:10.549793    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:10.549805    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:10.575033    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:10.575042    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:10.591985    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:10.591999    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:10.609038    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:10.609050    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:10.627731    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:10.627742    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:10.639577    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:10.639589    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:10.652789    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:10.652799    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:10.666843    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:10.666854    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:10.697151    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:10.697164    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:10.732782    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:10.732793    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:10.747007    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:10.747016    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:10.751852    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:10.751859    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:13.268642    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:18.271208    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:18.271444    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:18.292115    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:18.292211    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:18.307523    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:18.307601    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:18.319883    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:18.319956    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:18.330973    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:18.331040    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:18.341430    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:18.341503    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:18.352445    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:18.352507    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:18.366927    8538 logs.go:276] 0 containers: []
	W0617 04:46:18.366939    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:18.366998    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:18.377526    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:18.377543    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:18.377550    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:18.407540    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:18.407551    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:18.428184    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:18.428195    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:18.440329    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:18.440344    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:18.452352    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:18.452363    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:18.486768    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:18.486782    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:18.502092    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:18.502104    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:18.519173    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:18.519186    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:18.529822    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:18.529834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:18.547700    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:18.547710    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:18.562073    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:18.562089    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:18.566720    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:18.566729    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:18.579517    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:18.579527    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:18.595702    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:18.595715    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:18.607060    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:18.607075    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:18.624483    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:18.624494    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:18.635783    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:18.635795    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:21.162708    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:26.165077    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:26.165391    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:26.195250    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:26.195370    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:26.213045    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:26.213128    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:26.233015    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:26.233094    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:26.247985    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:26.248056    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:26.264424    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:26.264490    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:26.275409    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:26.275479    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:26.285400    8538 logs.go:276] 0 containers: []
	W0617 04:46:26.285411    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:26.285467    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:26.295769    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:26.295786    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:26.295791    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:26.319269    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:26.319281    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:26.333192    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:26.333203    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:26.344613    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:26.344623    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:26.355904    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:26.355914    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:26.376356    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:26.376369    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:26.391672    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:26.391683    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:26.408942    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:26.408953    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:26.420280    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:26.420290    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:26.432660    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:26.432674    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:26.447986    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:26.447997    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:26.466117    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:26.466127    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:26.477759    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:26.477773    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:26.489507    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:26.489518    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:26.519285    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:26.519296    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:26.523581    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:26.523587    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:26.558632    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:26.558643    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:29.075661    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:34.077827    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:34.077981    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:34.095605    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:34.095697    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:34.108892    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:34.108961    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:34.119500    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:34.119558    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:34.129829    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:34.129903    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:34.140628    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:34.140694    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:34.151109    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:34.151175    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:34.164430    8538 logs.go:276] 0 containers: []
	W0617 04:46:34.164442    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:34.164497    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:34.175211    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
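Each retry enumerates the control-plane containers by name before gathering their logs. The k8s_ prefix is the naming convention cri-dockerd applies to pod containers, and the separate "container status" step falls back from crictl to docker ps when crictl is absent. A small Go sketch of the enumeration step, assuming it runs directly on the node (minikube actually pipes these commands through ssh_runner into the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers, running or exited, whose name
    // matches k8s_<component> -- the same docker ps invocation that
    // appears throughout the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(len(ids), "containers:", ids)
    }

Two IDs per component (as for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager here) indicates a container that has already exited once and been recreated.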
	I0617 04:46:34.175235    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:34.175240    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:34.198446    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:34.198454    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:34.211256    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:34.211268    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:34.227963    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:34.227973    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:34.262792    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:34.262805    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:34.276673    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:34.276684    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:34.288234    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:34.288246    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:34.311456    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:34.311466    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:34.316193    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:34.316199    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:34.329299    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:34.329309    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:34.340869    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:34.340880    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:34.351895    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:34.351910    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:34.380687    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:34.380697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:34.395035    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:34.395046    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:34.416149    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:34.416163    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:34.432220    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:34.432230    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:34.444055    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:34.444066    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:36.966106    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:41.968311    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:41.968539    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:41.992413    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:41.992537    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:42.009675    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:42.009751    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:42.022791    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:42.022868    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:42.036807    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:42.036877    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:42.047546    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:42.047616    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:42.061260    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:42.061327    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:42.071288    8538 logs.go:276] 0 containers: []
	W0617 04:46:42.071301    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:42.071360    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:42.085206    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:42.085227    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:42.085232    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:42.105711    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:42.105723    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:42.123656    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:42.123667    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:42.152311    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:42.152321    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:42.191541    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:42.191555    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:42.204289    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:42.204300    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:42.216386    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:42.216399    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:42.227534    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:42.227548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:42.244695    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:42.244705    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:42.257194    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:42.257206    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:42.269074    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:42.269087    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:42.273327    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:42.273335    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:42.287357    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:42.287369    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:42.301781    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:42.301799    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:42.326010    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:42.326032    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:42.340020    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:42.340032    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:42.360968    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:42.360980    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:44.874957    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:49.877182    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:49.877328    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:49.890515    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:49.890590    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:49.901822    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:49.901895    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:49.911991    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:49.912057    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:49.922498    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:49.922576    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:49.933370    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:49.933432    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:49.944070    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:49.944145    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:49.954327    8538 logs.go:276] 0 containers: []
	W0617 04:46:49.954339    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:49.954398    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:49.965012    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:49.965030    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:49.965036    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:50.000006    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:50.000017    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:46:50.015886    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:50.015895    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:50.028891    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:50.028904    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:50.042662    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:50.042674    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:50.063124    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:50.063135    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:50.085642    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:50.085655    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:50.107104    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:50.107115    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:50.124312    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:50.124323    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:50.141606    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:50.141616    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:50.171357    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:50.171372    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:50.175899    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:50.175910    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:50.190416    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:50.190426    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:50.202537    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:50.202548    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:50.218792    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:50.218804    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:50.241951    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:50.241961    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:50.256213    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:50.256224    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:52.770314    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:46:57.772758    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:46:57.772987    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:46:57.796370    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:46:57.796482    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:46:57.813084    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:46:57.813164    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:46:57.826230    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:46:57.826307    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:46:57.837236    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:46:57.837307    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:46:57.847656    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:46:57.847722    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:46:57.858239    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:46:57.858308    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:46:57.872801    8538 logs.go:276] 0 containers: []
	W0617 04:46:57.872813    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:46:57.872870    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:46:57.883121    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:46:57.883140    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:46:57.883146    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:46:57.918890    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:46:57.918900    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:46:57.931745    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:46:57.931754    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:46:57.942298    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:46:57.942311    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:46:57.971464    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:46:57.971473    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:46:57.975681    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:46:57.975689    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:46:57.987058    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:46:57.987067    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:46:58.004163    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:46:58.004174    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:46:58.015948    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:46:58.015959    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:46:58.027556    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:46:58.027568    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:46:58.040024    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:46:58.040035    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:46:58.057906    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:46:58.057916    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:46:58.075676    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:46:58.075687    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:46:58.099820    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:46:58.099828    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:46:58.113811    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:46:58.113821    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:46:58.128007    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:46:58.128017    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:46:58.154012    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:46:58.154027    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:00.673504    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:05.675721    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:05.675820    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:05.686404    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:05.686478    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:05.696805    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:05.696881    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:05.707404    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:05.707476    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:05.717443    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:05.717515    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:05.727993    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:05.728058    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:05.738871    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:05.738938    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:05.749000    8538 logs.go:276] 0 containers: []
	W0617 04:47:05.749013    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:05.749076    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:05.759914    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:05.759932    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:05.759937    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:05.797026    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:05.797039    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:05.811409    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:05.811420    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:05.822989    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:05.823003    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:05.851628    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:05.851642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:05.865396    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:05.865407    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:05.877988    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:05.877998    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:05.889008    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:05.889023    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:05.907085    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:05.907095    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:05.921014    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:05.921023    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:05.936702    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:05.936712    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:05.948357    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:05.948370    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:05.965708    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:05.965723    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:05.977330    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:05.977341    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:05.991078    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:05.991088    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:05.995116    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:05.995124    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:06.016128    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:06.016139    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:08.542104    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:13.544715    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:13.545109    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:13.578840    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:13.578989    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:13.606642    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:13.606727    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:13.619828    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:13.619907    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:13.631718    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:13.631792    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:13.643094    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:13.643163    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:13.655519    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:13.655589    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:13.665775    8538 logs.go:276] 0 containers: []
	W0617 04:47:13.665785    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:13.665843    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:13.676199    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:13.676218    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:13.676224    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:13.711098    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:13.711109    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:13.724064    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:13.724073    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:13.734772    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:13.734785    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:13.750382    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:13.750395    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:13.763236    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:13.763248    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:13.784047    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:13.784058    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:13.800894    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:13.800905    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:13.828763    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:13.828771    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:13.842802    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:13.842814    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:13.854550    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:13.854560    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:13.872108    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:13.872118    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:13.883342    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:13.883351    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:13.907319    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:13.907327    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:13.920053    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:13.920064    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:13.924641    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:13.924647    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:13.938806    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:13.938818    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:16.456332    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:21.457040    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:21.457348    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:21.493602    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:21.493715    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:21.509993    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:21.510078    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:21.522514    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:21.522592    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:21.537925    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:21.537998    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:21.548650    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:21.548720    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:21.559359    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:21.559426    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:21.570196    8538 logs.go:276] 0 containers: []
	W0617 04:47:21.570207    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:21.570258    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:21.581669    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:21.581686    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:21.581691    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:21.593312    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:21.593324    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:21.614590    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:21.614602    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:21.637824    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:21.637837    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:21.665273    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:21.665285    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:21.679992    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:21.680003    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:21.700638    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:21.700650    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:21.713085    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:21.713096    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:21.746857    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:21.746869    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:21.759982    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:21.759996    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:21.777476    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:21.777490    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:21.788487    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:21.788501    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:21.802687    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:21.802697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:21.814541    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:21.814555    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:21.831627    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:21.831642    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:21.842685    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:21.842696    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:21.847286    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:21.847293    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:24.364306    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:29.366487    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:29.366618    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:47:29.381192    8538 logs.go:276] 2 containers: [822cea388d1a 4b8c612a132a]
	I0617 04:47:29.381284    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:47:29.393759    8538 logs.go:276] 2 containers: [6db51044b440 f5446e1c7e66]
	I0617 04:47:29.393822    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:47:29.407831    8538 logs.go:276] 1 containers: [ef70feb4aeee]
	I0617 04:47:29.407893    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:47:29.418422    8538 logs.go:276] 2 containers: [761b4578015c 28331efdc258]
	I0617 04:47:29.418485    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:47:29.433825    8538 logs.go:276] 1 containers: [d960a7f4963e]
	I0617 04:47:29.433902    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:47:29.444248    8538 logs.go:276] 2 containers: [641f21966ab8 df93b9767b63]
	I0617 04:47:29.444314    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:47:29.454504    8538 logs.go:276] 0 containers: []
	W0617 04:47:29.454519    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:47:29.454584    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:47:29.464745    8538 logs.go:276] 2 containers: [98f6897c7602 0938f605d529]
	I0617 04:47:29.468570    8538 logs.go:123] Gathering logs for kube-controller-manager [df93b9767b63] ...
	I0617 04:47:29.468582    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df93b9767b63"
	I0617 04:47:29.486749    8538 logs.go:123] Gathering logs for storage-provisioner [0938f605d529] ...
	I0617 04:47:29.486760    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0938f605d529"
	I0617 04:47:29.498583    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:47:29.498597    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:47:29.521028    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:47:29.521034    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:47:29.554738    8538 logs.go:123] Gathering logs for kube-apiserver [822cea388d1a] ...
	I0617 04:47:29.554750    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822cea388d1a"
	I0617 04:47:29.568943    8538 logs.go:123] Gathering logs for kube-scheduler [761b4578015c] ...
	I0617 04:47:29.568954    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761b4578015c"
	I0617 04:47:29.591685    8538 logs.go:123] Gathering logs for kube-proxy [d960a7f4963e] ...
	I0617 04:47:29.591697    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d960a7f4963e"
	I0617 04:47:29.603796    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:47:29.603806    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:47:29.615376    8538 logs.go:123] Gathering logs for etcd [f5446e1c7e66] ...
	I0617 04:47:29.615388    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5446e1c7e66"
	I0617 04:47:29.630524    8538 logs.go:123] Gathering logs for coredns [ef70feb4aeee] ...
	I0617 04:47:29.630534    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef70feb4aeee"
	I0617 04:47:29.641724    8538 logs.go:123] Gathering logs for kube-scheduler [28331efdc258] ...
	I0617 04:47:29.641734    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28331efdc258"
	I0617 04:47:29.657264    8538 logs.go:123] Gathering logs for storage-provisioner [98f6897c7602] ...
	I0617 04:47:29.657276    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f6897c7602"
	I0617 04:47:29.669601    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:47:29.669616    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:47:29.673907    8538 logs.go:123] Gathering logs for kube-controller-manager [641f21966ab8] ...
	I0617 04:47:29.673916    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641f21966ab8"
	I0617 04:47:29.696970    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:47:29.696982    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0617 04:47:29.725815    8538 logs.go:123] Gathering logs for kube-apiserver [4b8c612a132a] ...
	I0617 04:47:29.725834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8c612a132a"
	I0617 04:47:29.740005    8538 logs.go:123] Gathering logs for etcd [6db51044b440] ...
	I0617 04:47:29.740016    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6db51044b440"
	I0617 04:47:32.256082    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:37.257405    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:37.257482    8538 kubeadm.go:591] duration metric: took 4m3.634094791s to restartPrimaryControlPlane
	W0617 04:47:37.257545    8538 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0617 04:47:37.257575    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0617 04:47:38.231642    8538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 04:47:38.236705    8538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 04:47:38.239467    8538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 04:47:38.242286    8538 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 04:47:38.242293    8538 kubeadm.go:156] found existing configuration files:
	
	I0617 04:47:38.242315    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf
	I0617 04:47:38.245049    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 04:47:38.245074    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 04:47:38.247594    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf
	I0617 04:47:38.250521    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 04:47:38.250544    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 04:47:38.253835    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf
	I0617 04:47:38.256378    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 04:47:38.256401    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 04:47:38.258917    8538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf
	I0617 04:47:38.262080    8538 kubeadm.go:162] "https://control-plane.minikube.internal:51507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 04:47:38.262105    8538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
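The grep/rm sequence above is minikube's stale-config cleanup (kubeadm.go:154-162): each kubeconfig under /etc/kubernetes must mention the expected control-plane endpoint, and any file that is missing or does not match is removed so that the following kubeadm init regenerates it. In this run none of the four files exist, so every grep exits with status 2 and the rm calls are no-ops. A compact Go sketch of the same logic, with the endpoint taken from the log and error handling simplified:

    package main

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cleanStaleConfigs mirrors the grep/rm pairs in the log: a kubeconfig
    // that is unreadable or does not reference the expected endpoint is
    // deleted so `kubeadm init` will rewrite it.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := filepath.Join("/etc/kubernetes", f)
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path) // missing or stale: safe to regenerate
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:51507")
    }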
	I0617 04:47:38.266879    8538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0617 04:47:38.284575    8538 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0617 04:47:38.284640    8538 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 04:47:38.334908    8538 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 04:47:38.334971    8538 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 04:47:38.335025    8538 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 04:47:38.384440    8538 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 04:47:38.392525    8538 out.go:204]   - Generating certificates and keys ...
	I0617 04:47:38.392557    8538 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 04:47:38.392590    8538 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 04:47:38.392628    8538 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0617 04:47:38.392661    8538 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0617 04:47:38.392702    8538 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0617 04:47:38.392737    8538 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0617 04:47:38.392774    8538 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0617 04:47:38.392810    8538 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0617 04:47:38.392852    8538 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0617 04:47:38.392892    8538 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0617 04:47:38.392910    8538 kubeadm.go:309] [certs] Using the existing "sa" key
	I0617 04:47:38.392940    8538 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 04:47:38.515362    8538 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 04:47:38.644858    8538 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 04:47:38.720688    8538 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 04:47:38.842351    8538 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 04:47:38.870997    8538 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 04:47:38.871441    8538 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 04:47:38.871466    8538 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 04:47:38.937191    8538 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 04:47:38.941356    8538 out.go:204]   - Booting up control plane ...
	I0617 04:47:38.941545    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 04:47:38.941646    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 04:47:38.941702    8538 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 04:47:38.941741    8538 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 04:47:38.941845    8538 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0617 04:47:43.440624    8538 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501282 seconds
	I0617 04:47:43.440795    8538 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 04:47:43.446014    8538 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 04:47:43.955056    8538 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 04:47:43.955179    8538 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-767000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 04:47:44.459918    8538 kubeadm.go:309] [bootstrap-token] Using token: 3k16i9.lt87x78crfyjzuv5
	I0617 04:47:44.464216    8538 out.go:204]   - Configuring RBAC rules ...
	I0617 04:47:44.464263    8538 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 04:47:44.474093    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 04:47:44.476065    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 04:47:44.477990    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 04:47:44.479300    8538 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 04:47:44.481417    8538 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 04:47:44.485343    8538 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 04:47:44.635917    8538 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 04:47:44.863679    8538 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 04:47:44.864176    8538 kubeadm.go:309] 
	I0617 04:47:44.864205    8538 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 04:47:44.864207    8538 kubeadm.go:309] 
	I0617 04:47:44.864240    8538 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 04:47:44.864248    8538 kubeadm.go:309] 
	I0617 04:47:44.864266    8538 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 04:47:44.864298    8538 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 04:47:44.864348    8538 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 04:47:44.864351    8538 kubeadm.go:309] 
	I0617 04:47:44.864382    8538 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 04:47:44.864390    8538 kubeadm.go:309] 
	I0617 04:47:44.864418    8538 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 04:47:44.864422    8538 kubeadm.go:309] 
	I0617 04:47:44.864446    8538 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 04:47:44.864496    8538 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 04:47:44.864559    8538 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 04:47:44.864563    8538 kubeadm.go:309] 
	I0617 04:47:44.864605    8538 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 04:47:44.864688    8538 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 04:47:44.864693    8538 kubeadm.go:309] 
	I0617 04:47:44.864734    8538 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3k16i9.lt87x78crfyjzuv5 \
	I0617 04:47:44.864783    8538 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb \
	I0617 04:47:44.864795    8538 kubeadm.go:309] 	--control-plane 
	I0617 04:47:44.864799    8538 kubeadm.go:309] 
	I0617 04:47:44.864839    8538 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 04:47:44.864844    8538 kubeadm.go:309] 
	I0617 04:47:44.864882    8538 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3k16i9.lt87x78crfyjzuv5 \
	I0617 04:47:44.864952    8538 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ba62ea1b3e08ca4763f16658e0972aba486d1e9fb043a95882c5969d25820fbb 
	I0617 04:47:44.865264    8538 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
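The sha256:... value in the join commands above is kubeadm's CA key pin: it hashes the CA certificate's SubjectPublicKeyInfo, not the whole certificate file. A sketch of reproducing it from the cluster CA follows; the /var/lib/minikube/certs path matches the certificateDir logged during init, and the function name is illustrative:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash computes the --discovery-token-ca-cert-hash value:
    // the SHA-256 of the CA cert's raw SubjectPublicKeyInfo.
    func caCertHash(caPath string) (string, error) {
    	pemBytes, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return "", fmt.Errorf("no PEM data in %s", caPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(h)
    }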
	I0617 04:47:44.865275    8538 cni.go:84] Creating CNI manager for ""
	I0617 04:47:44.865283    8538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:47:44.869086    8538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0617 04:47:44.872896    8538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0617 04:47:44.875899    8538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
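
	(The 496-byte conflist written above is not reproduced in this log. For orientation only, a minimal bridge CNI chain of the kind minikube installs looks roughly like the Go sketch below; the subnet and plugin options are assumptions, not values taken from this run.)

	    package main

	    // Illustrative only: the actual /etc/cni/net.d/1-k8s.conflist content is
	    // not shown in this log. This writes a plausible bridge + portmap chain;
	    // the subnet and option values are assumed.

	    import (
	        "log"
	        "os"
	    )

	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            log.Fatal(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }
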
	I0617 04:47:44.881425    8538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 04:47:44.881496    8538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-767000 minikube.k8s.io/updated_at=2024_06_17T04_47_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=84fc08e1aa3123a23ee19b25404b578b39fd2f91 minikube.k8s.io/name=stopped-upgrade-767000 minikube.k8s.io/primary=true
	I0617 04:47:44.881499    8538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 04:47:44.914264    8538 kubeadm.go:1107] duration metric: took 32.798792ms to wait for elevateKubeSystemPrivileges
	I0617 04:47:44.923989    8538 ops.go:34] apiserver oom_adj: -16
	W0617 04:47:44.924021    8538 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 04:47:44.924035    8538 kubeadm.go:393] duration metric: took 4m11.314278458s to StartCluster
	I0617 04:47:44.924045    8538 settings.go:142] acquiring lock: {Name:mkdf59d9cf591c81341c913869983ffa33afef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:47:44.924136    8538 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:47:44.924540    8538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/kubeconfig: {Name:mk50fd79b579920a7f11ac34f212a8491ceefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:47:44.924770    8538 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:47:44.928897    8538 out.go:177] * Verifying Kubernetes components...
	I0617 04:47:44.924779    8538 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 04:47:44.924863    8538 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:47:44.933037    8538 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-767000"
	I0617 04:47:44.933039    8538 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-767000"
	I0617 04:47:44.933051    8538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-767000"
	I0617 04:47:44.933053    8538 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-767000"
	W0617 04:47:44.933056    8538 addons.go:243] addon storage-provisioner should already be in state true
	I0617 04:47:44.933067    8538 host.go:66] Checking if "stopped-upgrade-767000" exists ...
	I0617 04:47:44.933101    8538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 04:47:44.937914    8538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 04:47:44.940959    8538 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:47:44.940966    8538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 04:47:44.940973    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:47:44.942107    8538 kapi.go:59] client config for stopped-upgrade-767000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key", CAFile:"/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025a0460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
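
	(The config dumped above maps directly onto client-go's rest.Config. A minimal sketch of building the equivalent config by hand — assuming client-go is available; the host and certificate paths are copied from the log line, everything else is left at its zero value — would be:)

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        // Host and TLS file paths taken from the kapi.go:59 log line above.
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.crt",
	                KeyFile:  "/Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/stopped-upgrade-767000/client.key",
	                CAFile:   "/Users/jenkins/minikube-integration/19087-6045/.minikube/ca.crt",
	            },
	        }
	        fmt.Println("client config for", cfg.Host)
	    }
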
	I0617 04:47:44.942243    8538 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-767000"
	W0617 04:47:44.942249    8538 addons.go:243] addon default-storageclass should already be in state true
	I0617 04:47:44.942261    8538 host.go:66] Checking if "stopped-upgrade-767000" exists ...
	I0617 04:47:44.943004    8538 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 04:47:44.943008    8538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 04:47:44.943012    8538 sshutil.go:53] new ssh client: &{IP:localhost Port:51472 SSHKeyPath:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/stopped-upgrade-767000/id_rsa Username:docker}
	I0617 04:47:45.006041    8538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 04:47:45.011549    8538 api_server.go:52] waiting for apiserver process to appear ...
	I0617 04:47:45.011589    8538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 04:47:45.015226    8538 api_server.go:72] duration metric: took 90.447459ms to wait for apiserver process to appear ...
	I0617 04:47:45.015234    8538 api_server.go:88] waiting for apiserver healthz status ...
	I0617 04:47:45.015240    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:45.035675    8538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 04:47:45.037192    8538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 04:47:50.017281    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:50.017308    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:47:55.017464    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:47:55.017491    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:00.017934    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:00.017957    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:05.018313    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:05.018356    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:10.018882    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:10.018932    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:15.019690    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:15.019736    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
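
	(The repeating "stopped"/"Checking" pairs above are a poll loop: each probe of /healthz gets roughly five seconds — hence "Client.Timeout exceeded while awaiting headers" — and every failure is followed by another attempt. A minimal sketch of that pattern, assuming the endpoint and timeout inferred from the log timestamps:)

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5s gap between log entries
	            Transport: &http.Transport{
	                // The apiserver's cert is not trusted by the test host; a bare
	                // health probe would typically skip verification.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for i := 0; i < 6; i++ {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                fmt.Println("stopped:", err) // e.g. context deadline exceeded
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                fmt.Println("apiserver healthy")
	                return
	            }
	        }
	    }
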
	W0617 04:48:15.412329    8538 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0617 04:48:15.415460    8538 out.go:177] * Enabled addons: storage-provisioner
	I0617 04:48:15.422463    8538 addons.go:510] duration metric: took 30.498001834s for enable addons: enabled=[storage-provisioner]
	I0617 04:48:20.020574    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:20.020591    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:25.021671    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:25.021729    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:30.023186    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:30.023246    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:35.025323    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:35.025346    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:40.027473    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:40.027530    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:48:45.029657    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:48:45.029740    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:48:45.040317    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:48:45.040385    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:48:45.050678    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:48:45.050745    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:48:45.061875    8538 logs.go:276] 2 containers: [2b86f5d1bc61 e0ed9c632c77]
	I0617 04:48:45.061946    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:48:45.072866    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:48:45.072934    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:48:45.083170    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:48:45.083240    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:48:45.093596    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:48:45.093666    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:48:45.103857    8538 logs.go:276] 0 containers: []
	W0617 04:48:45.103869    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:48:45.103924    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:48:45.114889    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:48:45.114902    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:48:45.114907    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:48:45.151180    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:48:45.151192    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:48:45.166167    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:48:45.166179    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:48:45.177729    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:48:45.177743    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:48:45.193717    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:48:45.193732    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:48:45.208595    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:48:45.208611    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:48:45.219870    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:48:45.219881    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:48:45.244431    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:48:45.244442    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:48:45.274140    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:48:45.274232    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:48:45.275286    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:48:45.275292    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:48:45.279534    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:48:45.279540    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:48:45.294316    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:48:45.294330    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:48:45.309665    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:48:45.309676    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:48:45.321756    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:48:45.321767    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
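
	(After discovery, each pass tails the last 400 lines of every component container, as in the "Gathering logs for ..." runs above. A sketch of one such fetch, reusing the bash -c form the log uses; the container ID is the kube-apiserver ID resolved earlier:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // tailContainerLogs mirrors a single "Gathering logs" step: fetch the
	    // last 400 lines of one container's output.
	    func tailContainerLogs(id string) (string, error) {
	        out, err := exec.Command("/bin/bash", "-c",
	            fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        logs, err := tailContainerLogs("4fcbb714c869") // kube-apiserver ID from the log
	        if err != nil {
	            fmt.Println("error:", err)
	        }
	        fmt.Println(logs)
	    }
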
	I0617 04:48:45.341076    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:48:45.341086    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:48:45.341112    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:48:45.341115    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:48:45.341119    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:48:45.341176    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:48:45.341180    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:48:55.344114    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:49:00.346346    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:49:00.346460    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:49:00.366434    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:49:00.366500    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:49:00.378946    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:49:00.379020    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:49:00.391030    8538 logs.go:276] 2 containers: [2b86f5d1bc61 e0ed9c632c77]
	I0617 04:49:00.391099    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:49:00.403015    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:49:00.403090    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:49:00.414954    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:49:00.415027    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:49:00.426556    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:49:00.426628    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:49:00.437973    8538 logs.go:276] 0 containers: []
	W0617 04:49:00.437990    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:49:00.438050    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:49:00.449676    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:49:00.449696    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:49:00.449703    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:49:00.463120    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:49:00.463139    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:49:00.488991    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:49:00.489003    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:49:00.503104    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:49:00.503117    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:49:00.522903    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:49:00.522913    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:49:00.553580    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:00.553675    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:00.554797    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:49:00.554801    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:49:00.558824    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:49:00.558833    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:49:00.599922    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:49:00.599935    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:49:00.614596    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:49:00.614605    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:49:00.626014    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:49:00.626026    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:49:00.650912    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:49:00.650920    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:49:00.662642    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:49:00.662654    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:49:00.677565    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:49:00.677577    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:49:00.693403    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:00.693414    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:49:00.693442    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:49:00.693446    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:00.693452    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:00.693457    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:00.693460    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:49:10.697532    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:49:15.699513    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:49:15.699970    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:49:15.740261    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:49:15.740384    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:49:15.761833    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:49:15.761943    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:49:15.776857    8538 logs.go:276] 2 containers: [2b86f5d1bc61 e0ed9c632c77]
	I0617 04:49:15.776939    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:49:15.789497    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:49:15.789555    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:49:15.800036    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:49:15.800103    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:49:15.810123    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:49:15.810190    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:49:15.819850    8538 logs.go:276] 0 containers: []
	W0617 04:49:15.819863    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:49:15.819924    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:49:15.834010    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:49:15.834026    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:49:15.834032    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:49:15.848479    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:49:15.848489    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:49:15.860198    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:49:15.860207    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:49:15.891768    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:15.891870    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:15.893037    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:49:15.893044    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:49:15.907109    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:49:15.907119    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:49:15.920958    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:49:15.920968    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:49:15.932417    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:49:15.932428    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:49:15.944163    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:49:15.944173    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:49:15.955510    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:49:15.955521    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:49:15.976358    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:49:15.976367    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:49:16.001380    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:49:16.001387    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:49:16.005684    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:49:16.005692    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:49:16.041195    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:49:16.041208    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:49:16.052625    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:16.052636    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:49:16.052667    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:49:16.052671    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:16.052678    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:16.052683    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:16.052685    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:49:26.056770    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:49:31.059067    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:49:31.059540    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:49:31.107505    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:49:31.107639    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:49:31.125794    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:49:31.125872    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:49:31.139572    8538 logs.go:276] 2 containers: [2b86f5d1bc61 e0ed9c632c77]
	I0617 04:49:31.139628    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:49:31.150965    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:49:31.151030    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:49:31.161529    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:49:31.161595    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:49:31.172013    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:49:31.172074    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:49:31.182368    8538 logs.go:276] 0 containers: []
	W0617 04:49:31.182385    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:49:31.182441    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:49:31.194020    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:49:31.194033    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:49:31.194038    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:49:31.213660    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:49:31.213670    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:49:31.239091    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:49:31.239105    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:49:31.251602    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:49:31.251611    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:49:31.256329    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:49:31.256340    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:49:31.271356    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:49:31.271368    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:49:31.289126    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:49:31.289137    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:49:31.301671    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:49:31.301682    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:49:31.313942    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:49:31.313954    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:49:31.326077    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:49:31.326087    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:49:31.340377    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:49:31.340389    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:49:31.371891    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:31.371989    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:31.373079    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:49:31.373087    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:49:31.413461    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:49:31.413471    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:49:31.430191    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:31.430201    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:49:31.430228    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:49:31.430233    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:31.430244    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:31.430249    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:31.430251    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:49:41.434392    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:49:46.437258    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:49:46.437722    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:49:46.483584    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:49:46.483737    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:49:46.503637    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:49:46.503728    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:49:46.518079    8538 logs.go:276] 2 containers: [2b86f5d1bc61 e0ed9c632c77]
	I0617 04:49:46.518143    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:49:46.532728    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:49:46.532797    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:49:46.543412    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:49:46.543481    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:49:46.554050    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:49:46.554105    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:49:46.566311    8538 logs.go:276] 0 containers: []
	W0617 04:49:46.566322    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:49:46.566369    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:49:46.580729    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:49:46.580748    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:49:46.580754    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:49:46.585080    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:49:46.585086    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:49:46.623547    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:49:46.623559    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:49:46.636280    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:49:46.636291    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:49:46.647686    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:49:46.647699    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:49:46.659048    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:49:46.659064    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:49:46.674387    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:49:46.674396    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:49:46.686031    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:49:46.686042    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:49:46.703290    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:49:46.703299    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:49:46.734650    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:46.734746    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:46.735838    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:49:46.735846    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:49:46.749939    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:49:46.749951    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:49:46.767518    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:49:46.767531    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:49:46.791034    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:49:46.791042    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:49:46.802389    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:46.802403    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:49:46.802428    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:49:46.802434    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:49:46.802437    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:49:46.802442    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:49:46.802444    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:49:56.806519    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:50:01.808834    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:50:01.809257    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:50:01.845627    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:50:01.845757    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:50:01.867702    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:50:01.867818    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:50:01.882430    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:50:01.882503    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:50:01.894808    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:50:01.894875    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:50:01.905713    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:50:01.905773    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:50:01.916826    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:50:01.916893    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:50:01.931132    8538 logs.go:276] 0 containers: []
	W0617 04:50:01.931142    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:50:01.931191    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:50:01.941706    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:50:01.941720    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:50:01.941725    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:50:01.966420    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:50:01.966426    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:50:01.980003    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:50:01.980013    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:50:01.992106    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:50:01.992117    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:50:02.009848    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:50:02.009859    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:50:02.025513    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:50:02.025526    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:50:02.030250    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:50:02.030259    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:50:02.041943    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:50:02.041956    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:50:02.056711    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:50:02.056722    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:50:02.068581    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:50:02.068596    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:50:02.086469    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:50:02.086482    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:50:02.103661    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:50:02.103672    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:50:02.114832    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:50:02.114843    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:50:02.127052    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:50:02.127064    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:50:02.156926    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:02.157017    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:02.158072    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:50:02.158076    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:50:02.192132    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:02.192142    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:50:02.192172    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:50:02.192178    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:02.192181    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:02.192185    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:02.192188    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:12.196290    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:50:17.197860    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:50:17.198166    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:50:17.225462    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:50:17.225578    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:50:17.243448    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:50:17.243533    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:50:17.258043    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:50:17.258121    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:50:17.269498    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:50:17.269565    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:50:17.280212    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:50:17.280268    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:50:17.295858    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:50:17.295927    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:50:17.305813    8538 logs.go:276] 0 containers: []
	W0617 04:50:17.305825    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:50:17.305881    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:50:17.316316    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:50:17.316333    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:50:17.316339    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:50:17.329816    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:50:17.329828    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:50:17.341253    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:50:17.341267    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:50:17.370970    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:17.371062    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:17.372118    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:50:17.372123    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:50:17.408317    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:50:17.408328    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:50:17.422753    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:50:17.422767    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:50:17.434718    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:50:17.434729    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:50:17.449300    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:50:17.449313    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:50:17.467114    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:50:17.467129    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:50:17.488602    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:50:17.488614    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:50:17.503946    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:50:17.503959    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:50:17.508527    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:50:17.508535    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:50:17.520908    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:50:17.520917    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:50:17.532613    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:50:17.532625    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:50:17.544340    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:50:17.544353    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:50:17.568670    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:17.568681    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:50:17.568706    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:50:17.568711    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:17.568715    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:17.568720    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:17.568722    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:27.571228    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:50:32.573863    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:50:32.574257    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:50:32.607971    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:50:32.608083    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:50:32.627895    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:50:32.627986    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:50:32.642437    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:50:32.642504    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:50:32.654109    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:50:32.654168    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:50:32.664572    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:50:32.664628    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:50:32.678718    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:50:32.678777    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:50:32.688813    8538 logs.go:276] 0 containers: []
	W0617 04:50:32.688823    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:50:32.688871    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:50:32.699158    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:50:32.699173    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:50:32.699178    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:50:32.731055    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:32.731148    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:32.732217    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:50:32.732222    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:50:32.766682    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:50:32.766691    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:50:32.778805    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:50:32.778816    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:50:32.796713    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:50:32.796724    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:50:32.808020    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:50:32.808034    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:50:32.833136    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:50:32.833143    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:50:32.847412    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:50:32.847423    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:50:32.861914    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:50:32.861923    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:50:32.873526    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:50:32.873535    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:50:32.884825    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:50:32.884837    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:50:32.889307    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:50:32.889317    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:50:32.902770    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:50:32.902779    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:50:32.938713    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:50:32.938725    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:50:32.952062    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:50:32.952078    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:50:32.963570    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:32.963582    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:50:32.963610    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:50:32.963615    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:32.963618    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:32.963623    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:32.963626    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:42.967526    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:50:47.970085    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:50:47.970152    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:50:47.982227    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:50:47.982286    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:50:47.996931    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:50:47.997005    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:50:48.008825    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:50:48.008888    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:50:48.020574    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:50:48.020654    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:50:48.037795    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:50:48.037871    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:50:48.053025    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:50:48.053079    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:50:48.063646    8538 logs.go:276] 0 containers: []
	W0617 04:50:48.063657    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:50:48.063697    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:50:48.080983    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:50:48.080996    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:50:48.081000    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:50:48.095860    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:50:48.095871    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:50:48.109128    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:50:48.109143    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:50:48.122505    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:50:48.122516    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:50:48.137380    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:50:48.137389    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:50:48.141625    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:50:48.141631    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:50:48.156292    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:50:48.156302    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:50:48.174335    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:50:48.174342    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:50:48.199200    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:50:48.199210    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:50:48.230555    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:48.230654    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:48.231767    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:50:48.231775    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:50:48.245294    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:50:48.245306    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:50:48.261883    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:50:48.261899    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:50:48.276086    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:50:48.276099    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:50:48.289338    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:50:48.289354    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:50:48.337452    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:50:48.337469    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:50:48.354563    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:48.354574    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:50:48.354606    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:50:48.354610    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:50:48.354615    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:50:48.354619    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:48.354621    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:58.356614    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:51:03.358492    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:51:03.358641    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:51:03.372343    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:51:03.372414    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:51:03.383481    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:51:03.383555    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:51:03.394045    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:51:03.394110    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:51:03.404817    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:51:03.404890    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:51:03.415002    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:51:03.415069    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:51:03.424617    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:51:03.424680    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:51:03.435009    8538 logs.go:276] 0 containers: []
	W0617 04:51:03.435020    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:51:03.435070    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:51:03.445192    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:51:03.445209    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:51:03.445214    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:51:03.459443    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:51:03.459457    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:51:03.476686    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:51:03.476699    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:51:03.494516    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:51:03.494529    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:51:03.510731    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:51:03.510741    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:51:03.524820    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:51:03.524834    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:51:03.536377    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:51:03.536389    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:51:03.547795    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:51:03.547805    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:51:03.577332    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:03.577427    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:03.578552    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:51:03.578558    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:51:03.612569    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:51:03.612582    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:51:03.624376    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:51:03.624388    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:51:03.628804    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:51:03.628813    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:51:03.642752    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:51:03.642764    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:51:03.658534    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:51:03.658549    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:51:03.670095    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:51:03.670105    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:51:03.694467    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:03.694475    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:51:03.694498    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:51:03.694502    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:03.694506    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:03.694510    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:03.694513    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:13.697452    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:51:18.699879    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:51:18.700332    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:51:18.743076    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:51:18.743194    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:51:18.762755    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:51:18.762856    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:51:18.777108    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:51:18.777184    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:51:18.789680    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:51:18.789747    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:51:18.800484    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:51:18.800544    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:51:18.811200    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:51:18.811266    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:51:18.821352    8538 logs.go:276] 0 containers: []
	W0617 04:51:18.821362    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:51:18.821411    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:51:18.832983    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:51:18.832999    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:51:18.833005    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:51:18.844863    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:51:18.844877    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:51:18.856592    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:51:18.856604    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:51:18.867921    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:51:18.867930    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:51:18.879151    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:51:18.879162    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:51:18.903406    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:51:18.903412    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:51:18.932942    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:18.933035    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:18.934104    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:51:18.934108    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:51:18.938317    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:51:18.938322    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:51:18.952640    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:51:18.952652    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:51:18.967350    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:51:18.967362    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:51:19.007380    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:51:19.007394    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:51:19.019449    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:51:19.019462    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:51:19.034284    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:51:19.034295    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:51:19.051631    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:51:19.051641    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:51:19.063488    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:51:19.063500    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:51:19.075514    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:19.075527    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:51:19.075551    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:51:19.075557    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:19.075560    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:19.075566    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:19.075569    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:29.079006    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:51:34.080505    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:51:34.080791    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0617 04:51:34.109077    8538 logs.go:276] 1 containers: [4fcbb714c869]
	I0617 04:51:34.109189    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0617 04:51:34.128916    8538 logs.go:276] 1 containers: [d6709ca110b2]
	I0617 04:51:34.128987    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0617 04:51:34.141709    8538 logs.go:276] 4 containers: [ebe4d51133c0 b067f471cccf 2b86f5d1bc61 e0ed9c632c77]
	I0617 04:51:34.141784    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0617 04:51:34.153237    8538 logs.go:276] 1 containers: [c42568d88795]
	I0617 04:51:34.153300    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0617 04:51:34.164793    8538 logs.go:276] 1 containers: [a755f81a54f8]
	I0617 04:51:34.164856    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0617 04:51:34.175762    8538 logs.go:276] 1 containers: [0499d7380994]
	I0617 04:51:34.175814    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0617 04:51:34.190968    8538 logs.go:276] 0 containers: []
	W0617 04:51:34.190977    8538 logs.go:278] No container was found matching "kindnet"
	I0617 04:51:34.191030    8538 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0617 04:51:34.201586    8538 logs.go:276] 1 containers: [8c3a87f4fb30]
	I0617 04:51:34.201606    8538 logs.go:123] Gathering logs for kube-apiserver [4fcbb714c869] ...
	I0617 04:51:34.201613    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fcbb714c869"
	I0617 04:51:34.216740    8538 logs.go:123] Gathering logs for coredns [e0ed9c632c77] ...
	I0617 04:51:34.216754    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0ed9c632c77"
	I0617 04:51:34.229461    8538 logs.go:123] Gathering logs for kube-controller-manager [0499d7380994] ...
	I0617 04:51:34.229475    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0499d7380994"
	I0617 04:51:34.248473    8538 logs.go:123] Gathering logs for storage-provisioner [8c3a87f4fb30] ...
	I0617 04:51:34.248484    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3a87f4fb30"
	I0617 04:51:34.261548    8538 logs.go:123] Gathering logs for container status ...
	I0617 04:51:34.261563    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 04:51:34.274291    8538 logs.go:123] Gathering logs for dmesg ...
	I0617 04:51:34.274303    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 04:51:34.279041    8538 logs.go:123] Gathering logs for describe nodes ...
	I0617 04:51:34.279053    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 04:51:34.355421    8538 logs.go:123] Gathering logs for coredns [ebe4d51133c0] ...
	I0617 04:51:34.355436    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe4d51133c0"
	I0617 04:51:34.368198    8538 logs.go:123] Gathering logs for kubelet ...
	I0617 04:51:34.368210    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 04:51:34.399241    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:34.399344    8538 logs.go:138] Found kubelet problem: Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:34.400469    8538 logs.go:123] Gathering logs for etcd [d6709ca110b2] ...
	I0617 04:51:34.400479    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6709ca110b2"
	I0617 04:51:34.419255    8538 logs.go:123] Gathering logs for coredns [b067f471cccf] ...
	I0617 04:51:34.419271    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b067f471cccf"
	I0617 04:51:34.431406    8538 logs.go:123] Gathering logs for kube-scheduler [c42568d88795] ...
	I0617 04:51:34.431417    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c42568d88795"
	I0617 04:51:34.446513    8538 logs.go:123] Gathering logs for coredns [2b86f5d1bc61] ...
	I0617 04:51:34.446524    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b86f5d1bc61"
	I0617 04:51:34.459230    8538 logs.go:123] Gathering logs for kube-proxy [a755f81a54f8] ...
	I0617 04:51:34.459242    8538 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a755f81a54f8"
	I0617 04:51:34.474689    8538 logs.go:123] Gathering logs for Docker ...
	I0617 04:51:34.474701    8538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0617 04:51:34.499742    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:34.499755    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 04:51:34.499782    8538 out.go:239] X Problems detected in kubelet:
	W0617 04:51:34.499786    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: W0617 11:47:57.959073   10340 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	W0617 04:51:34.499824    8538 out.go:239]   Jun 17 11:47:57 stopped-upgrade-767000 kubelet[10340]: E0617 11:47:57.959127   10340 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-767000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-767000' and this object
	I0617 04:51:34.499831    8538 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:34.499834    8538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:44.503907    8538 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0617 04:51:49.506306    8538 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0617 04:51:49.511095    8538 out.go:177] 
	W0617 04:51:49.515307    8538 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0617 04:51:49.515333    8538 out.go:239] * 
	W0617 04:51:49.517916    8538 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:49.531015    8538 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-767000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (578.16s)
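Editor's note: the repeated `Checking apiserver healthz` / `stopped: ... context deadline exceeded` pairs above are minikube's API-server wait loop: it probes https://10.0.2.15:8443/healthz, re-gathers container logs between attempts, and gives up once the 6m0s window expires, which produces the GUEST_START exit. A minimal Go sketch of that polling pattern, with the ~10s spacing and ~5s per-request timeout inferred from the timestamps above (assumptions, not minikube's actual constants), and InsecureSkipVerify standing in for minikube's real client TLS configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes url until it returns 200 or the overall window passes.
// Interval and per-request timeout are inferred from the log, not taken
// from minikube's source.
func pollHealthz(url string, overall, interval, perRequest time.Duration) error {
	client := &http.Client{
		Timeout: perRequest,
		// The apiserver serves a self-signed certificate; skipping
		// verification here is a simplification of minikube's cert handling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	// URL and the 6-minute window are taken from the log above.
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 10*time.Second, 5*time.Second); err != nil {
		fmt.Println("X", err)
	}
}

Note that each `Client.Timeout exceeded while awaiting headers` line in the log is the per-request timeout firing on the client side, not the apiserver reporting itself unhealthy.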

TestPause/serial/Start (10.08s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.021453375s)

-- stdout --
	* [pause-103000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-103000" primary control-plane node in "pause-103000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-103000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-103000 -n pause-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-103000 -n pause-103000: exit status 7 (53.51225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.08s)
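
Note on the remaining failures: every start in this run dies on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver dials the socket_vmnet daemon's unix socket before it ever launches QEMU, so while the daemon is down the VM is never created and minikube exits with status 80, the GUEST_PROVISION exit shown in each stderr. A minimal triage sketch for the CI host, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (the service name and Homebrew paths are assumptions, not taken from this log):

	# check whether anything is listening on the socket the driver dials
	ls -l /var/run/socket_vmnet
	# socket_vmnet must run as root; (re)start it through Homebrew services
	HOMEBREW=$(which brew)
	sudo ${HOMEBREW} services restart socket_vmnet
	# re-run one failing start to confirm the socket is reachable again
	out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2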

TestNoKubernetes/serial/StartWithK8s (9.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 : exit status 80 (9.821683417s)

-- stdout --
	* [NoKubernetes-684000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-684000" primary control-plane node in "NoKubernetes-684000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-684000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000: exit status 7 (67.575834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)
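
Note: the remaining TestNoKubernetes subtests reuse the NoKubernetes-684000 profile left behind by this failure, so their logs read "Restarting existing qemu2 VM" instead of "Creating qemu2 VM", and the error prefix shifts from "creating host: create: creating:" to "driver start:". The underlying refusal on /var/run/socket_vmnet is identical.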

TestNoKubernetes/serial/StartWithStopK8s (5.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 : exit status 80 (5.381665875s)

-- stdout --
	* [NoKubernetes-684000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-684000
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000: exit status 7 (62.122542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.44s)

TestNoKubernetes/serial/Start (5.41s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 : exit status 80 (5.375083792s)

-- stdout --
	* [NoKubernetes-684000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-684000
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000: exit status 7 (31.238084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.41s)

TestNoKubernetes/serial/StartNoArgs (5.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 : exit status 80 (5.459771333s)

-- stdout --
	* [NoKubernetes-684000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-684000
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-684000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-684000 -n NoKubernetes-684000: exit status 7 (40.569334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.50s)

TestNetworkPlugins/group/auto/Start (9.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.929428792s)

-- stdout --
	* [auto-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-696000" primary control-plane node in "auto-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:50:11.413036    8769 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:50:11.413168    8769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:11.413172    8769 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:11.413174    8769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:11.413301    8769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:50:11.414415    8769 out.go:298] Setting JSON to false
	I0617 04:50:11.430767    8769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4781,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:50:11.430824    8769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:50:11.436497    8769 out.go:177] * [auto-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:50:11.444475    8769 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:50:11.444531    8769 notify.go:220] Checking for updates...
	I0617 04:50:11.448491    8769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:50:11.452489    8769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:50:11.455609    8769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:50:11.458489    8769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:50:11.461524    8769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:50:11.464793    8769 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:50:11.464866    8769 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:50:11.464917    8769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:50:11.469493    8769 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:50:11.476402    8769 start.go:297] selected driver: qemu2
	I0617 04:50:11.476406    8769 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:50:11.476411    8769 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:50:11.478533    8769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:50:11.482496    8769 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:50:11.485609    8769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:50:11.485661    8769 cni.go:84] Creating CNI manager for ""
	I0617 04:50:11.485673    8769 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:50:11.485677    8769 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:50:11.485717    8769 start.go:340] cluster config:
	{Name:auto-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:50:11.490029    8769 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:50:11.498500    8769 out.go:177] * Starting "auto-696000" primary control-plane node in "auto-696000" cluster
	I0617 04:50:11.502455    8769 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:50:11.502484    8769 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:50:11.502492    8769 cache.go:56] Caching tarball of preloaded images
	I0617 04:50:11.502554    8769 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:50:11.502559    8769 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:50:11.502625    8769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/auto-696000/config.json ...
	I0617 04:50:11.502635    8769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/auto-696000/config.json: {Name:mkdfe73409793f4b27076cc46045bedec4222adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:50:11.502999    8769 start.go:360] acquireMachinesLock for auto-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:11.503032    8769 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "auto-696000"
	I0617 04:50:11.503041    8769 start.go:93] Provisioning new machine with config: &{Name:auto-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:11.503072    8769 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:11.511497    8769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:11.528005    8769 start.go:159] libmachine.API.Create for "auto-696000" (driver="qemu2")
	I0617 04:50:11.528044    8769 client.go:168] LocalClient.Create starting
	I0617 04:50:11.528103    8769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:11.528132    8769 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:11.528157    8769 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:11.528200    8769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:11.528223    8769 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:11.528231    8769 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:11.528719    8769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:11.681734    8769 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:11.722065    8769 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:11.722073    8769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:11.722236    8769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:11.734748    8769 main.go:141] libmachine: STDOUT: 
	I0617 04:50:11.734772    8769 main.go:141] libmachine: STDERR: 
	I0617 04:50:11.734834    8769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2 +20000M
	I0617 04:50:11.746144    8769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:11.746162    8769 main.go:141] libmachine: STDERR: 
	I0617 04:50:11.746179    8769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:11.746184    8769 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:11.746219    8769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:16:2e:17:50:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:11.748012    8769 main.go:141] libmachine: STDOUT: 
	I0617 04:50:11.748036    8769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:11.748056    8769 client.go:171] duration metric: took 220.008666ms to LocalClient.Create
	I0617 04:50:13.750379    8769 start.go:128] duration metric: took 2.247277209s to createHost
	I0617 04:50:13.750485    8769 start.go:83] releasing machines lock for "auto-696000", held for 2.247466958s
	W0617 04:50:13.750645    8769 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:13.764891    8769 out.go:177] * Deleting "auto-696000" in qemu2 ...
	W0617 04:50:13.793861    8769 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:13.793917    8769 start.go:728] Will try again in 5 seconds ...
	I0617 04:50:18.796048    8769 start.go:360] acquireMachinesLock for auto-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:18.796645    8769 start.go:364] duration metric: took 506.625µs to acquireMachinesLock for "auto-696000"
	I0617 04:50:18.796797    8769 start.go:93] Provisioning new machine with config: &{Name:auto-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:18.797092    8769 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:18.802744    8769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:18.847985    8769 start.go:159] libmachine.API.Create for "auto-696000" (driver="qemu2")
	I0617 04:50:18.848033    8769 client.go:168] LocalClient.Create starting
	I0617 04:50:18.848150    8769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:18.848209    8769 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:18.848230    8769 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:18.848288    8769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:18.848326    8769 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:18.848343    8769 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:18.848805    8769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:19.010782    8769 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:19.249472    8769 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:19.249480    8769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:19.249647    8769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:19.262901    8769 main.go:141] libmachine: STDOUT: 
	I0617 04:50:19.262925    8769 main.go:141] libmachine: STDERR: 
	I0617 04:50:19.263001    8769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2 +20000M
	I0617 04:50:19.274464    8769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:19.274480    8769 main.go:141] libmachine: STDERR: 
	I0617 04:50:19.274499    8769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:19.274504    8769 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:19.274537    8769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:99:cb:1f:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/auto-696000/disk.qcow2
	I0617 04:50:19.276330    8769 main.go:141] libmachine: STDOUT: 
	I0617 04:50:19.276356    8769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:19.276369    8769 client.go:171] duration metric: took 428.335041ms to LocalClient.Create
	I0617 04:50:21.278566    8769 start.go:128] duration metric: took 2.481415667s to createHost
	I0617 04:50:21.278635    8769 start.go:83] releasing machines lock for "auto-696000", held for 2.4819895s
	W0617 04:50:21.278962    8769 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:21.288526    8769 out.go:177] 
	W0617 04:50:21.291694    8769 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:50:21.291712    8769 out.go:239] * 
	* 
	W0617 04:50:21.293346    8769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:50:21.302545    8769 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)
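
This block runs with --alsologtostderr, so the verbose trace pins down where every one of these starts dies: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... and the client fails the socket connect before qemu-system-aarch64 runs at all (note the empty STDOUT and LocalClient.Create returning within a few hundred milliseconds). The connect failure can be reproduced in isolation with the same client binary; a sketch, where the echo payload is an illustrative stand-in rather than anything taken from this log:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected
	# expected on this host: Failed to connect to "/var/run/socket_vmnet": Connection refused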

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.850367s)

-- stdout --
	* [kindnet-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-696000" primary control-plane node in "kindnet-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:50:23.547823    8880 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:50:23.547957    8880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:23.547960    8880 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:23.547962    8880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:23.548098    8880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:50:23.549283    8880 out.go:298] Setting JSON to false
	I0617 04:50:23.565679    8880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4793,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:50:23.565756    8880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:50:23.570895    8880 out.go:177] * [kindnet-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:50:23.578801    8880 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:50:23.582757    8880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:50:23.578884    8880 notify.go:220] Checking for updates...
	I0617 04:50:23.587311    8880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:50:23.591799    8880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:50:23.594827    8880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:50:23.596384    8880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:50:23.600188    8880 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:50:23.600257    8880 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:50:23.600306    8880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:50:23.604816    8880 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:50:23.610789    8880 start.go:297] selected driver: qemu2
	I0617 04:50:23.610795    8880 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:50:23.610803    8880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:50:23.613127    8880 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:50:23.616835    8880 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:50:23.618433    8880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:50:23.618452    8880 cni.go:84] Creating CNI manager for "kindnet"
	I0617 04:50:23.618468    8880 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 04:50:23.618497    8880 start.go:340] cluster config:
	{Name:kindnet-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:50:23.622876    8880 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:50:23.630829    8880 out.go:177] * Starting "kindnet-696000" primary control-plane node in "kindnet-696000" cluster
	I0617 04:50:23.634758    8880 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:50:23.634773    8880 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:50:23.634788    8880 cache.go:56] Caching tarball of preloaded images
	I0617 04:50:23.634859    8880 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:50:23.634867    8880 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:50:23.634951    8880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kindnet-696000/config.json ...
	I0617 04:50:23.634965    8880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kindnet-696000/config.json: {Name:mkc8e84c0069886480247ae34321cf816b01741b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:50:23.635352    8880 start.go:360] acquireMachinesLock for kindnet-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:23.635386    8880 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "kindnet-696000"
	I0617 04:50:23.635396    8880 start.go:93] Provisioning new machine with config: &{Name:kindnet-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:23.635425    8880 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:23.640310    8880 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:23.656549    8880 start.go:159] libmachine.API.Create for "kindnet-696000" (driver="qemu2")
	I0617 04:50:23.656578    8880 client.go:168] LocalClient.Create starting
	I0617 04:50:23.656632    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:23.656661    8880 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:23.656671    8880 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:23.656712    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:23.656733    8880 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:23.656746    8880 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:23.657134    8880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:23.811593    8880 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:23.917312    8880 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:23.917319    8880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:23.917488    8880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:23.930517    8880 main.go:141] libmachine: STDOUT: 
	I0617 04:50:23.930537    8880 main.go:141] libmachine: STDERR: 
	I0617 04:50:23.930608    8880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2 +20000M
	I0617 04:50:23.941727    8880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:23.941745    8880 main.go:141] libmachine: STDERR: 
	I0617 04:50:23.941766    8880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:23.941770    8880 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:23.941805    8880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:29:39:5b:1c:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:23.943517    8880 main.go:141] libmachine: STDOUT: 
	I0617 04:50:23.943533    8880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:23.943554    8880 client.go:171] duration metric: took 286.972875ms to LocalClient.Create
	I0617 04:50:25.945805    8880 start.go:128] duration metric: took 2.310371791s to createHost
	I0617 04:50:25.945901    8880 start.go:83] releasing machines lock for "kindnet-696000", held for 2.310528916s
	W0617 04:50:25.945995    8880 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:25.953393    8880 out.go:177] * Deleting "kindnet-696000" in qemu2 ...
	W0617 04:50:25.982528    8880 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:25.982558    8880 start.go:728] Will try again in 5 seconds ...
	I0617 04:50:30.983655    8880 start.go:360] acquireMachinesLock for kindnet-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:30.984273    8880 start.go:364] duration metric: took 463.458µs to acquireMachinesLock for "kindnet-696000"
	I0617 04:50:30.984418    8880 start.go:93] Provisioning new machine with config: &{Name:kindnet-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:30.984748    8880 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:30.994377    8880 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:31.044131    8880 start.go:159] libmachine.API.Create for "kindnet-696000" (driver="qemu2")
	I0617 04:50:31.044194    8880 client.go:168] LocalClient.Create starting
	I0617 04:50:31.044329    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:31.044385    8880 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:31.044400    8880 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:31.044486    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:31.044529    8880 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:31.044550    8880 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:31.045101    8880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:31.208480    8880 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:31.304130    8880 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:31.304141    8880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:31.304325    8880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:31.317102    8880 main.go:141] libmachine: STDOUT: 
	I0617 04:50:31.317126    8880 main.go:141] libmachine: STDERR: 
	I0617 04:50:31.317179    8880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2 +20000M
	I0617 04:50:31.328189    8880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:31.328209    8880 main.go:141] libmachine: STDERR: 
	I0617 04:50:31.328222    8880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:31.328227    8880 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:31.328266    8880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:74:82:1a:62:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kindnet-696000/disk.qcow2
	I0617 04:50:31.329978    8880 main.go:141] libmachine: STDOUT: 
	I0617 04:50:31.329997    8880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:31.330008    8880 client.go:171] duration metric: took 285.808542ms to LocalClient.Create
	I0617 04:50:33.332108    8880 start.go:128] duration metric: took 2.347360209s to createHost
	I0617 04:50:33.332201    8880 start.go:83] releasing machines lock for "kindnet-696000", held for 2.347888583s
	W0617 04:50:33.332350    8880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:33.341355    8880 out.go:177] 
	W0617 04:50:33.344324    8880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:50:33.344349    8880 out.go:239] * 
	* 
	W0617 04:50:33.345973    8880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:50:33.356308    8880 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
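[Editor's note] Every qemu2 start in this group fails at the same step: the driver launches the VM through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation aborts before Kubernetes is ever involved. The Go sketch below is a minimal pre-flight check for that condition; it is illustrative only and not part of minikube — the socket path is taken from the log above, and the 2-second timeout is an assumption.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path reported in every "Connection refused" failure above.
		const sock = "/var/run/socket_vmnet"
		// Dialing the unix socket approximates what socket_vmnet_client does
		// before handing qemu the network file descriptor.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
			fmt.Println("start the socket_vmnet daemon before running these tests")
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is up; qemu2 VM creation should get past this step")
	}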

TestNetworkPlugins/group/calico/Start (10.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.007608583s)

-- stdout --
	* [calico-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-696000" primary control-plane node in "calico-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:50:35.697291    8994 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:50:35.697416    8994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:35.697420    8994 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:35.697423    8994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:35.697552    8994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:50:35.698586    8994 out.go:298] Setting JSON to false
	I0617 04:50:35.715145    8994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4805,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:50:35.715207    8994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:50:35.722117    8994 out.go:177] * [calico-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:50:35.730138    8994 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:50:35.730193    8994 notify.go:220] Checking for updates...
	I0617 04:50:35.734099    8994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:50:35.735227    8994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:50:35.738056    8994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:50:35.742121    8994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:50:35.743594    8994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:50:35.747416    8994 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:50:35.747484    8994 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:50:35.747530    8994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:50:35.752082    8994 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:50:35.758066    8994 start.go:297] selected driver: qemu2
	I0617 04:50:35.758072    8994 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:50:35.758078    8994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:50:35.760486    8994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:50:35.764058    8994 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:50:35.765585    8994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:50:35.765635    8994 cni.go:84] Creating CNI manager for "calico"
	I0617 04:50:35.765645    8994 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0617 04:50:35.765678    8994 start.go:340] cluster config:
	{Name:calico-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:50:35.770247    8994 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:50:35.777070    8994 out.go:177] * Starting "calico-696000" primary control-plane node in "calico-696000" cluster
	I0617 04:50:35.781047    8994 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:50:35.781060    8994 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:50:35.781068    8994 cache.go:56] Caching tarball of preloaded images
	I0617 04:50:35.781126    8994 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:50:35.781137    8994 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:50:35.781198    8994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/calico-696000/config.json ...
	I0617 04:50:35.781209    8994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/calico-696000/config.json: {Name:mk8c6b730833fc54c0f7c76047a98a095d21b581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:50:35.781582    8994 start.go:360] acquireMachinesLock for calico-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:35.781615    8994 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "calico-696000"
	I0617 04:50:35.781625    8994 start.go:93] Provisioning new machine with config: &{Name:calico-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:35.781654    8994 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:35.790047    8994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:35.806284    8994 start.go:159] libmachine.API.Create for "calico-696000" (driver="qemu2")
	I0617 04:50:35.806315    8994 client.go:168] LocalClient.Create starting
	I0617 04:50:35.806386    8994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:35.806417    8994 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:35.806431    8994 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:35.806480    8994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:35.806503    8994 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:35.806520    8994 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:35.806983    8994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:35.962554    8994 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:36.258147    8994 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:36.258159    8994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:36.258361    8994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:36.271449    8994 main.go:141] libmachine: STDOUT: 
	I0617 04:50:36.271474    8994 main.go:141] libmachine: STDERR: 
	I0617 04:50:36.271537    8994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2 +20000M
	I0617 04:50:36.282928    8994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:36.282944    8994 main.go:141] libmachine: STDERR: 
	I0617 04:50:36.282956    8994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:36.282962    8994 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:36.282995    8994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:59:7d:31:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:36.284723    8994 main.go:141] libmachine: STDOUT: 
	I0617 04:50:36.284738    8994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:36.284760    8994 client.go:171] duration metric: took 478.443542ms to LocalClient.Create
	I0617 04:50:38.286996    8994 start.go:128] duration metric: took 2.505335041s to createHost
	I0617 04:50:38.287104    8994 start.go:83] releasing machines lock for "calico-696000", held for 2.505504542s
	W0617 04:50:38.287167    8994 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:38.305637    8994 out.go:177] * Deleting "calico-696000" in qemu2 ...
	W0617 04:50:38.331426    8994 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:38.331449    8994 start.go:728] Will try again in 5 seconds ...
	I0617 04:50:43.332508    8994 start.go:360] acquireMachinesLock for calico-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:43.332764    8994 start.go:364] duration metric: took 213.584µs to acquireMachinesLock for "calico-696000"
	I0617 04:50:43.332843    8994 start.go:93] Provisioning new machine with config: &{Name:calico-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:43.332961    8994 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:43.341262    8994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:43.367673    8994 start.go:159] libmachine.API.Create for "calico-696000" (driver="qemu2")
	I0617 04:50:43.367731    8994 client.go:168] LocalClient.Create starting
	I0617 04:50:43.367829    8994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:43.367873    8994 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:43.367885    8994 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:43.367934    8994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:43.367963    8994 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:43.367973    8994 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:43.368310    8994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:43.524832    8994 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:43.604634    8994 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:43.604645    8994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:43.604836    8994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:43.617472    8994 main.go:141] libmachine: STDOUT: 
	I0617 04:50:43.617490    8994 main.go:141] libmachine: STDERR: 
	I0617 04:50:43.617540    8994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2 +20000M
	I0617 04:50:43.628809    8994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:43.628825    8994 main.go:141] libmachine: STDERR: 
	I0617 04:50:43.628847    8994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:43.628852    8994 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:43.628881    8994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:70:96:2a:7f:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/calico-696000/disk.qcow2
	I0617 04:50:43.630592    8994 main.go:141] libmachine: STDOUT: 
	I0617 04:50:43.630610    8994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:43.630625    8994 client.go:171] duration metric: took 262.884541ms to LocalClient.Create
	I0617 04:50:45.632802    8994 start.go:128] duration metric: took 2.299837667s to createHost
	I0617 04:50:45.632929    8994 start.go:83] releasing machines lock for "calico-696000", held for 2.300161459s
	W0617 04:50:45.633311    8994 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:45.646182    8994 out.go:177] 
	W0617 04:50:45.649108    8994 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:50:45.649128    8994 out.go:239] * 
	* 
	W0617 04:50:45.650587    8994 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:50:45.663031    8994 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.01s)
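[Editor's note] The calico run repeats the kindnet pattern exactly: StartHost fails, the half-created profile is deleted, one retry runs five seconds later (start.go:728), and the second failure surfaces as GUEST_PROVISION with exit status 80. The sketch below shows the shape of that control flow; it is an assumed illustration for readers of this report, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the logged behaviour: one cleanup-and-retry
	// after a fixed delay, after which the error is reported as GUEST_PROVISION.
	func startWithRetry(create func() error, cleanup func()) error {
		if err := create(); err != nil {
			cleanup()                   // "* Deleting ... in qemu2 ..."
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = create(); err != nil {
				return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		err := startWithRetry(
			func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`) },
			func() { fmt.Println("deleting half-created profile") },
		)
		fmt.Println(err) // both attempts fail here, mirroring exit status 80
	}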

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.821013792s)

-- stdout --
	* [custom-flannel-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-696000" primary control-plane node in "custom-flannel-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:50:48.113271    9119 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:50:48.113428    9119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:48.113432    9119 out.go:304] Setting ErrFile to fd 2...
	I0617 04:50:48.113434    9119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:50:48.113585    9119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:50:48.114857    9119 out.go:298] Setting JSON to false
	I0617 04:50:48.133643    9119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4818,"bootTime":1718620230,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:50:48.133730    9119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:50:48.138685    9119 out.go:177] * [custom-flannel-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:50:48.146680    9119 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:50:48.146816    9119 notify.go:220] Checking for updates...
	I0617 04:50:48.150500    9119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:50:48.153617    9119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:50:48.156592    9119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:50:48.159640    9119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:50:48.162651    9119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:50:48.166061    9119 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:50:48.166136    9119 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:50:48.166193    9119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:50:48.170589    9119 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:50:48.177641    9119 start.go:297] selected driver: qemu2
	I0617 04:50:48.177647    9119 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:50:48.177653    9119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:50:48.179949    9119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:50:48.182565    9119 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:50:48.185622    9119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:50:48.185668    9119 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0617 04:50:48.185677    9119 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0617 04:50:48.185709    9119 start.go:340] cluster config:
	{Name:custom-flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:50:48.190034    9119 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:50:48.197476    9119 out.go:177] * Starting "custom-flannel-696000" primary control-plane node in "custom-flannel-696000" cluster
	I0617 04:50:48.201530    9119 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:50:48.201557    9119 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:50:48.201566    9119 cache.go:56] Caching tarball of preloaded images
	I0617 04:50:48.201633    9119 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:50:48.201638    9119 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:50:48.201694    9119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/custom-flannel-696000/config.json ...
	I0617 04:50:48.201704    9119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/custom-flannel-696000/config.json: {Name:mk0f24671142bdd48f452a81a7255b707a9b007e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:50:48.201984    9119 start.go:360] acquireMachinesLock for custom-flannel-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:48.202035    9119 start.go:364] duration metric: took 44.125µs to acquireMachinesLock for "custom-flannel-696000"
	I0617 04:50:48.202049    9119 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:48.202077    9119 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:48.205452    9119 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:48.220980    9119 start.go:159] libmachine.API.Create for "custom-flannel-696000" (driver="qemu2")
	I0617 04:50:48.221016    9119 client.go:168] LocalClient.Create starting
	I0617 04:50:48.221092    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:48.221127    9119 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:48.221136    9119 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:48.221183    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:48.221206    9119 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:48.221213    9119 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:48.221601    9119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:48.392706    9119 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:48.523170    9119 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:48.523178    9119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:48.523353    9119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:48.536019    9119 main.go:141] libmachine: STDOUT: 
	I0617 04:50:48.536043    9119 main.go:141] libmachine: STDERR: 
	I0617 04:50:48.536101    9119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2 +20000M
	I0617 04:50:48.547315    9119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:48.547331    9119 main.go:141] libmachine: STDERR: 
	I0617 04:50:48.547346    9119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:48.547349    9119 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:48.547376    9119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4f:6d:49:ed:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:48.549111    9119 main.go:141] libmachine: STDOUT: 
	I0617 04:50:48.549126    9119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:48.549148    9119 client.go:171] duration metric: took 328.126875ms to LocalClient.Create
	I0617 04:50:50.551368    9119 start.go:128] duration metric: took 2.349284292s to createHost
	I0617 04:50:50.551439    9119 start.go:83] releasing machines lock for "custom-flannel-696000", held for 2.349418875s
	W0617 04:50:50.551513    9119 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:50.558959    9119 out.go:177] * Deleting "custom-flannel-696000" in qemu2 ...
	W0617 04:50:50.589728    9119 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:50.589760    9119 start.go:728] Will try again in 5 seconds ...
	I0617 04:50:55.591822    9119 start.go:360] acquireMachinesLock for custom-flannel-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:50:55.592109    9119 start.go:364] duration metric: took 235.583µs to acquireMachinesLock for "custom-flannel-696000"
	I0617 04:50:55.592177    9119 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:50:55.592274    9119 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:50:55.598236    9119 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:50:55.628752    9119 start.go:159] libmachine.API.Create for "custom-flannel-696000" (driver="qemu2")
	I0617 04:50:55.628786    9119 client.go:168] LocalClient.Create starting
	I0617 04:50:55.628871    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:50:55.628927    9119 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:55.628943    9119 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:55.628993    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:50:55.629031    9119 main.go:141] libmachine: Decoding PEM data...
	I0617 04:50:55.629040    9119 main.go:141] libmachine: Parsing certificate...
	I0617 04:50:55.629576    9119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:50:55.789051    9119 main.go:141] libmachine: Creating SSH key...
	I0617 04:50:55.823109    9119 main.go:141] libmachine: Creating Disk image...
	I0617 04:50:55.823114    9119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:50:55.823297    9119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:55.836099    9119 main.go:141] libmachine: STDOUT: 
	I0617 04:50:55.836130    9119 main.go:141] libmachine: STDERR: 
	I0617 04:50:55.836176    9119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2 +20000M
	I0617 04:50:55.847099    9119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:50:55.847114    9119 main.go:141] libmachine: STDERR: 
	I0617 04:50:55.847123    9119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:55.847127    9119 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:50:55.847162    9119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:17:69:7e:20:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/custom-flannel-696000/disk.qcow2
	I0617 04:50:55.848901    9119 main.go:141] libmachine: STDOUT: 
	I0617 04:50:55.848923    9119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:50:55.848936    9119 client.go:171] duration metric: took 220.1485ms to LocalClient.Create
	I0617 04:50:57.851115    9119 start.go:128] duration metric: took 2.258836583s to createHost
	I0617 04:50:57.851178    9119 start.go:83] releasing machines lock for "custom-flannel-696000", held for 2.259077041s
	W0617 04:50:57.851492    9119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:50:57.866233    9119 out.go:177] 
	W0617 04:50:57.871329    9119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:50:57.871395    9119 out.go:239] * 
	W0617 04:50:57.874470    9119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:50:57.889140    9119 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
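Every failure in this group shares one root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 never receives the file descriptor for its -netdev socket and libmachine aborts host creation with exit status 80. A quick probe on the build agent confirms whether the daemon is down; this is a minimal sketch, and the pgrep pattern is an assumption about how the daemon appears in the process table:

	# Is any socket_vmnet daemon running on this agent?
	pgrep -fl socket_vmnet

	# socket_vmnet_client should exec its trailing command only after a
	# successful connect, so `true` acts as a pure connectivity probe and
	# should reproduce the exact "Connection refused" error seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe fails, none of the in-test retries below can succeed, since every profile in this group is configured with Network:socket_vmnet.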

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.837103417s)

-- stdout --
	* [false-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-696000" primary control-plane node in "false-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:51:00.339282    9238 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:51:00.339401    9238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:00.339403    9238 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:00.339406    9238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:00.339537    9238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:51:00.340704    9238 out.go:298] Setting JSON to false
	I0617 04:51:00.357023    9238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4830,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:51:00.357096    9238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:51:00.364041    9238 out.go:177] * [false-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:51:00.370996    9238 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:51:00.371056    9238 notify.go:220] Checking for updates...
	I0617 04:51:00.378001    9238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:51:00.381008    9238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:51:00.385009    9238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:51:00.387926    9238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:51:00.390938    9238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:51:00.394293    9238 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:51:00.394378    9238 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:51:00.394425    9238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:51:00.398989    9238 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:51:00.406007    9238 start.go:297] selected driver: qemu2
	I0617 04:51:00.406012    9238 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:51:00.406020    9238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:51:00.408122    9238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:51:00.411055    9238 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:51:00.415047    9238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:51:00.415076    9238 cni.go:84] Creating CNI manager for "false"
	I0617 04:51:00.415105    9238 start.go:340] cluster config:
	{Name:false-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:51:00.419382    9238 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:51:00.426992    9238 out.go:177] * Starting "false-696000" primary control-plane node in "false-696000" cluster
	I0617 04:51:00.430999    9238 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:51:00.431027    9238 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:51:00.431037    9238 cache.go:56] Caching tarball of preloaded images
	I0617 04:51:00.431102    9238 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:51:00.431107    9238 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:51:00.431171    9238 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/false-696000/config.json ...
	I0617 04:51:00.431181    9238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/false-696000/config.json: {Name:mk68680a0bc403720add276e97912cb05038e03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:51:00.431389    9238 start.go:360] acquireMachinesLock for false-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:00.431420    9238 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "false-696000"
	I0617 04:51:00.431431    9238 start.go:93] Provisioning new machine with config: &{Name:false-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:00.431457    9238 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:00.440044    9238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:00.456441    9238 start.go:159] libmachine.API.Create for "false-696000" (driver="qemu2")
	I0617 04:51:00.456478    9238 client.go:168] LocalClient.Create starting
	I0617 04:51:00.456536    9238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:00.456565    9238 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:00.456576    9238 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:00.456619    9238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:00.456643    9238 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:00.456652    9238 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:00.457059    9238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:00.612317    9238 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:00.689153    9238 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:00.689158    9238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:00.689307    9238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:00.702548    9238 main.go:141] libmachine: STDOUT: 
	I0617 04:51:00.702573    9238 main.go:141] libmachine: STDERR: 
	I0617 04:51:00.702633    9238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2 +20000M
	I0617 04:51:00.714423    9238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:00.714441    9238 main.go:141] libmachine: STDERR: 
	I0617 04:51:00.714460    9238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:00.714464    9238 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:00.714498    9238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:45:aa:6b:3b:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:00.716350    9238 main.go:141] libmachine: STDOUT: 
	I0617 04:51:00.716365    9238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:00.716384    9238 client.go:171] duration metric: took 259.902334ms to LocalClient.Create
	I0617 04:51:02.718565    9238 start.go:128] duration metric: took 2.28710825s to createHost
	I0617 04:51:02.718649    9238 start.go:83] releasing machines lock for "false-696000", held for 2.287244791s
	W0617 04:51:02.718711    9238 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:02.728476    9238 out.go:177] * Deleting "false-696000" in qemu2 ...
	W0617 04:51:02.754505    9238 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:02.754527    9238 start.go:728] Will try again in 5 seconds ...
	I0617 04:51:07.756727    9238 start.go:360] acquireMachinesLock for false-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:07.757247    9238 start.go:364] duration metric: took 408.25µs to acquireMachinesLock for "false-696000"
	I0617 04:51:07.757373    9238 start.go:93] Provisioning new machine with config: &{Name:false-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:07.757684    9238 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:07.765158    9238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:07.816005    9238 start.go:159] libmachine.API.Create for "false-696000" (driver="qemu2")
	I0617 04:51:07.816091    9238 client.go:168] LocalClient.Create starting
	I0617 04:51:07.816242    9238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:07.816324    9238 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:07.816341    9238 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:07.816406    9238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:07.816453    9238 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:07.816465    9238 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:07.817017    9238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:07.981128    9238 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:08.086064    9238 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:08.086074    9238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:08.086247    9238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:08.098901    9238 main.go:141] libmachine: STDOUT: 
	I0617 04:51:08.098921    9238 main.go:141] libmachine: STDERR: 
	I0617 04:51:08.098975    9238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2 +20000M
	I0617 04:51:08.110000    9238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:08.110016    9238 main.go:141] libmachine: STDERR: 
	I0617 04:51:08.110029    9238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:08.110035    9238 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:08.110065    9238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ab:45:e5:f0:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/false-696000/disk.qcow2
	I0617 04:51:08.111844    9238 main.go:141] libmachine: STDOUT: 
	I0617 04:51:08.111861    9238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:08.111874    9238 client.go:171] duration metric: took 295.780833ms to LocalClient.Create
	I0617 04:51:10.113968    9238 start.go:128] duration metric: took 2.356285083s to createHost
	I0617 04:51:10.114038    9238 start.go:83] releasing machines lock for "false-696000", held for 2.356767375s
	W0617 04:51:10.114169    9238 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:10.122586    9238 out.go:177] 
	W0617 04:51:10.126521    9238 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:51:10.126559    9238 out.go:239] * 
	W0617 04:51:10.127377    9238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:10.138537    9238 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
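The failure above also shows the retry shape common to this group: the first create fails, minikube deletes the half-built "false-696000" profile, waits five seconds ("Will try again in 5 seconds ..."), and the second attempt dies on the identical connect error. The retry cannot help because the broken component is the host-side daemon, not the profile. A sketch of the recovery steps, assuming socket_vmnet was installed at the paths the cluster config reports (SocketVMnetPath, SocketVMnetClientPath); the daemon flags follow the socket_vmnet README and should be checked against this agent's actual service definition:

	# Restart the daemon behind /var/run/socket_vmnet (flags are assumptions).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# Then clear the stale profile, as the error text itself recommends.
	out/minikube-darwin-arm64 delete -p false-696000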

TestNetworkPlugins/group/enable-default-cni/Start (9.98s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.978256375s)

-- stdout --
	* [enable-default-cni-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-696000" primary control-plane node in "enable-default-cni-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:51:12.279985    9351 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:51:12.280129    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:12.280132    9351 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:12.280135    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:12.280296    9351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:51:12.281378    9351 out.go:298] Setting JSON to false
	I0617 04:51:12.297773    9351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4842,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:51:12.297837    9351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:51:12.305431    9351 out.go:177] * [enable-default-cni-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:51:12.312436    9351 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:51:12.312519    9351 notify.go:220] Checking for updates...
	I0617 04:51:12.316318    9351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:51:12.319377    9351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:51:12.322339    9351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:51:12.323800    9351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:51:12.326321    9351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:51:12.329777    9351 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:51:12.329844    9351 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:51:12.329888    9351 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:51:12.333218    9351 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:51:12.340371    9351 start.go:297] selected driver: qemu2
	I0617 04:51:12.340376    9351 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:51:12.340384    9351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:51:12.342694    9351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:51:12.345362    9351 out.go:177] * Automatically selected the socket_vmnet network
	E0617 04:51:12.348401    9351 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0617 04:51:12.348418    9351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:51:12.348470    9351 cni.go:84] Creating CNI manager for "bridge"
	I0617 04:51:12.348475    9351 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:51:12.348522    9351 start.go:340] cluster config:
	{Name:enable-default-cni-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:51:12.352840    9351 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:51:12.359358    9351 out.go:177] * Starting "enable-default-cni-696000" primary control-plane node in "enable-default-cni-696000" cluster
	I0617 04:51:12.363344    9351 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:51:12.363357    9351 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:51:12.363364    9351 cache.go:56] Caching tarball of preloaded images
	I0617 04:51:12.363428    9351 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:51:12.363433    9351 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:51:12.363487    9351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/enable-default-cni-696000/config.json ...
	I0617 04:51:12.363497    9351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/enable-default-cni-696000/config.json: {Name:mk1289a0a175b236d8b25c3627f31df59a76a869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:51:12.363855    9351 start.go:360] acquireMachinesLock for enable-default-cni-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:12.363888    9351 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "enable-default-cni-696000"
	I0617 04:51:12.363900    9351 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:12.363951    9351 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:12.367393    9351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:12.383634    9351 start.go:159] libmachine.API.Create for "enable-default-cni-696000" (driver="qemu2")
	I0617 04:51:12.383662    9351 client.go:168] LocalClient.Create starting
	I0617 04:51:12.383716    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:12.383749    9351 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:12.383760    9351 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:12.383801    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:12.383824    9351 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:12.383831    9351 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:12.384357    9351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:12.540365    9351 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:12.688655    9351 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:12.688662    9351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:12.688841    9351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:12.702270    9351 main.go:141] libmachine: STDOUT: 
	I0617 04:51:12.702290    9351 main.go:141] libmachine: STDERR: 
	I0617 04:51:12.702353    9351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2 +20000M
	I0617 04:51:12.714536    9351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:12.714552    9351 main.go:141] libmachine: STDERR: 
	I0617 04:51:12.714580    9351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:12.714585    9351 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:12.714617    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:17:7f:94:cc:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:12.716454    9351 main.go:141] libmachine: STDOUT: 
	I0617 04:51:12.716468    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:12.716494    9351 client.go:171] duration metric: took 332.829042ms to LocalClient.Create
	I0617 04:51:14.718685    9351 start.go:128] duration metric: took 2.354733542s to createHost
	I0617 04:51:14.718740    9351 start.go:83] releasing machines lock for "enable-default-cni-696000", held for 2.354869834s
	W0617 04:51:14.718780    9351 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:14.729309    9351 out.go:177] * Deleting "enable-default-cni-696000" in qemu2 ...
	W0617 04:51:14.751098    9351 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:14.751116    9351 start.go:728] Will try again in 5 seconds ...
	I0617 04:51:19.753234    9351 start.go:360] acquireMachinesLock for enable-default-cni-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:19.753566    9351 start.go:364] duration metric: took 263.375µs to acquireMachinesLock for "enable-default-cni-696000"
	I0617 04:51:19.753604    9351 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:19.753771    9351 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:19.761643    9351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:19.799628    9351 start.go:159] libmachine.API.Create for "enable-default-cni-696000" (driver="qemu2")
	I0617 04:51:19.799678    9351 client.go:168] LocalClient.Create starting
	I0617 04:51:19.799786    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:19.799837    9351 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:19.799849    9351 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:19.799899    9351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:19.799936    9351 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:19.799945    9351 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:19.800467    9351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:19.961063    9351 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:20.156463    9351 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:20.156477    9351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:20.156676    9351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:20.170180    9351 main.go:141] libmachine: STDOUT: 
	I0617 04:51:20.170221    9351 main.go:141] libmachine: STDERR: 
	I0617 04:51:20.170307    9351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2 +20000M
	I0617 04:51:20.181820    9351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:20.181838    9351 main.go:141] libmachine: STDERR: 
	I0617 04:51:20.181851    9351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:20.181855    9351 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:20.181908    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:87:47:5e:ac:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/enable-default-cni-696000/disk.qcow2
	I0617 04:51:20.183678    9351 main.go:141] libmachine: STDOUT: 
	I0617 04:51:20.183695    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:20.183709    9351 client.go:171] duration metric: took 384.028417ms to LocalClient.Create
	I0617 04:51:22.185878    9351 start.go:128] duration metric: took 2.432104042s to createHost
	I0617 04:51:22.185959    9351 start.go:83] releasing machines lock for "enable-default-cni-696000", held for 2.432402041s
	W0617 04:51:22.186402    9351 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:22.202209    9351 out.go:177] 
	W0617 04:51:22.206258    9351 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:51:22.206286    9351 out.go:239] * 
	W0617 04:51:22.207751    9351 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:22.220148    9351 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.98s)
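One detail in this run is independent of the networking failure: the E-level line in the stderr above shows that --enable-default-cni is deprecated and is rewritten internally to --cni=bridge (the resulting cluster config carries NetworkPlugin:cni and CNI:bridge). Once socket_vmnet is healthy, passing the bridge CNI directly avoids the deprecation path; a sketch reusing the same profile and flags as the failing command:

	out/minikube-darwin-arm64 start -p enable-default-cni-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2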

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.847677292s)

-- stdout --
	* [flannel-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-696000" primary control-plane node in "flannel-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:51:24.378336    9465 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:51:24.378479    9465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:24.378482    9465 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:24.378485    9465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:24.378621    9465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:51:24.379708    9465 out.go:298] Setting JSON to false
	I0617 04:51:24.395845    9465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4854,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:51:24.395906    9465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:51:24.401750    9465 out.go:177] * [flannel-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:51:24.409815    9465 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:51:24.409861    9465 notify.go:220] Checking for updates...
	I0617 04:51:24.413690    9465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:51:24.416710    9465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:51:24.419719    9465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:51:24.423676    9465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:51:24.426721    9465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:51:24.430110    9465 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:51:24.430187    9465 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:51:24.430238    9465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:51:24.433734    9465 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:51:24.440757    9465 start.go:297] selected driver: qemu2
	I0617 04:51:24.440761    9465 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:51:24.440766    9465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:51:24.443001    9465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:51:24.446712    9465 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:51:24.450807    9465 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:51:24.450838    9465 cni.go:84] Creating CNI manager for "flannel"
	I0617 04:51:24.450844    9465 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0617 04:51:24.450876    9465 start.go:340] cluster config:
	{Name:flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:51:24.455284    9465 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:51:24.463710    9465 out.go:177] * Starting "flannel-696000" primary control-plane node in "flannel-696000" cluster
	I0617 04:51:24.467691    9465 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:51:24.467709    9465 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:51:24.467718    9465 cache.go:56] Caching tarball of preloaded images
	I0617 04:51:24.467787    9465 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:51:24.467794    9465 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:51:24.467874    9465 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/flannel-696000/config.json ...
	I0617 04:51:24.467886    9465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/flannel-696000/config.json: {Name:mk0a487bd2e85e22d737381831394708baec0779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:51:24.468279    9465 start.go:360] acquireMachinesLock for flannel-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:24.468327    9465 start.go:364] duration metric: took 39.833µs to acquireMachinesLock for "flannel-696000"
	I0617 04:51:24.468339    9465 start.go:93] Provisioning new machine with config: &{Name:flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:24.468369    9465 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:24.476731    9465 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:24.491784    9465 start.go:159] libmachine.API.Create for "flannel-696000" (driver="qemu2")
	I0617 04:51:24.491820    9465 client.go:168] LocalClient.Create starting
	I0617 04:51:24.491888    9465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:24.491917    9465 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:24.491930    9465 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:24.491979    9465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:24.492004    9465 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:24.492017    9465 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:24.492435    9465 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:24.647265    9465 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:24.720323    9465 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:24.720330    9465 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:24.720513    9465 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:24.733779    9465 main.go:141] libmachine: STDOUT: 
	I0617 04:51:24.733805    9465 main.go:141] libmachine: STDERR: 
	I0617 04:51:24.733851    9465 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2 +20000M
	I0617 04:51:24.745388    9465 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:24.745413    9465 main.go:141] libmachine: STDERR: 
	I0617 04:51:24.745435    9465 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:24.745442    9465 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:24.745475    9465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:17:68:35:58:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:24.747235    9465 main.go:141] libmachine: STDOUT: 
	I0617 04:51:24.747256    9465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:24.747276    9465 client.go:171] duration metric: took 255.45275ms to LocalClient.Create
	I0617 04:51:26.749471    9465 start.go:128] duration metric: took 2.281100625s to createHost
	I0617 04:51:26.749551    9465 start.go:83] releasing machines lock for "flannel-696000", held for 2.281239125s
	W0617 04:51:26.749641    9465 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:26.763849    9465 out.go:177] * Deleting "flannel-696000" in qemu2 ...
	W0617 04:51:26.789549    9465 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:26.789572    9465 start.go:728] Will try again in 5 seconds ...
	I0617 04:51:31.791742    9465 start.go:360] acquireMachinesLock for flannel-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:31.792227    9465 start.go:364] duration metric: took 402.167µs to acquireMachinesLock for "flannel-696000"
	I0617 04:51:31.792386    9465 start.go:93] Provisioning new machine with config: &{Name:flannel-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:31.792711    9465 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:31.801456    9465 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:31.845624    9465 start.go:159] libmachine.API.Create for "flannel-696000" (driver="qemu2")
	I0617 04:51:31.845691    9465 client.go:168] LocalClient.Create starting
	I0617 04:51:31.845874    9465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:31.845944    9465 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:31.845967    9465 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:31.846041    9465 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:31.846085    9465 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:31.846098    9465 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:31.846552    9465 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:32.009054    9465 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:32.140903    9465 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:32.140910    9465 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:32.141091    9465 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:32.154083    9465 main.go:141] libmachine: STDOUT: 
	I0617 04:51:32.154104    9465 main.go:141] libmachine: STDERR: 
	I0617 04:51:32.154173    9465 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2 +20000M
	I0617 04:51:32.165763    9465 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:32.165780    9465 main.go:141] libmachine: STDERR: 
	I0617 04:51:32.165793    9465 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:32.165798    9465 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:32.165840    9465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e0:11:e3:f5:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/flannel-696000/disk.qcow2
	I0617 04:51:32.167631    9465 main.go:141] libmachine: STDOUT: 
	I0617 04:51:32.167645    9465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:32.167659    9465 client.go:171] duration metric: took 321.945917ms to LocalClient.Create
	I0617 04:51:34.169697    9465 start.go:128] duration metric: took 2.376972167s to createHost
	I0617 04:51:34.169719    9465 start.go:83] releasing machines lock for "flannel-696000", held for 2.377497666s
	W0617 04:51:34.169800    9465 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:34.172999    9465 out.go:177] 
	W0617 04:51:34.178037    9465 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:51:34.178044    9465 out.go:239] * 
	* 
	W0617 04:51:34.178541    9465 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:34.190003    9465 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
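
When the daemon process is up but connections are still refused, the socket can be probed directly; BSD nc(1) on macOS speaks Unix-domain sockets via -U. The restart line below is a sketch based on the socket_vmnet project's documented invocation; the flags and gateway address are assumptions that may differ on this agent:

	# Probe the socket directly; an immediate error here reproduces the
	# failure without involving minikube or QEMU:
	nc -U /var/run/socket_vmnet < /dev/null
	# Restart the daemon by hand (example flags; gateway is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet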

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.828991375s)

-- stdout --
	* [bridge-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-696000" primary control-plane node in "bridge-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:51:36.541833    9585 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:51:36.541960    9585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:36.541963    9585 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:36.541966    9585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:36.542100    9585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:51:36.543158    9585 out.go:298] Setting JSON to false
	I0617 04:51:36.559968    9585 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4866,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:51:36.560036    9585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:51:36.565187    9585 out.go:177] * [bridge-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:51:36.572165    9585 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:51:36.572267    9585 notify.go:220] Checking for updates...
	I0617 04:51:36.580175    9585 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:51:36.583174    9585 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:51:36.587179    9585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:51:36.590294    9585 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:51:36.593150    9585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:51:36.596458    9585 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:51:36.596523    9585 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:51:36.596581    9585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:51:36.601198    9585 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:51:36.608140    9585 start.go:297] selected driver: qemu2
	I0617 04:51:36.608144    9585 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:51:36.608149    9585 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:51:36.610330    9585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:51:36.614159    9585 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:51:36.617184    9585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:51:36.617198    9585 cni.go:84] Creating CNI manager for "bridge"
	I0617 04:51:36.617201    9585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:51:36.617230    9585 start.go:340] cluster config:
	{Name:bridge-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:51:36.621367    9585 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:51:36.629027    9585 out.go:177] * Starting "bridge-696000" primary control-plane node in "bridge-696000" cluster
	I0617 04:51:36.633138    9585 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:51:36.633149    9585 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:51:36.633157    9585 cache.go:56] Caching tarball of preloaded images
	I0617 04:51:36.633203    9585 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:51:36.633207    9585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:51:36.633262    9585 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/bridge-696000/config.json ...
	I0617 04:51:36.633271    9585 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/bridge-696000/config.json: {Name:mk1b386014e432e0d780fef9bf59fae129d84ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:51:36.633476    9585 start.go:360] acquireMachinesLock for bridge-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:36.633507    9585 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "bridge-696000"
	I0617 04:51:36.633517    9585 start.go:93] Provisioning new machine with config: &{Name:bridge-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:36.633543    9585 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:36.641163    9585 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:36.656201    9585 start.go:159] libmachine.API.Create for "bridge-696000" (driver="qemu2")
	I0617 04:51:36.656230    9585 client.go:168] LocalClient.Create starting
	I0617 04:51:36.656286    9585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:36.656315    9585 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:36.656325    9585 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:36.656372    9585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:36.656394    9585 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:36.656401    9585 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:36.656738    9585 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:36.807253    9585 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:36.907254    9585 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:36.907260    9585 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:36.907460    9585 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:36.919935    9585 main.go:141] libmachine: STDOUT: 
	I0617 04:51:36.919955    9585 main.go:141] libmachine: STDERR: 
	I0617 04:51:36.920016    9585 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2 +20000M
	I0617 04:51:36.930879    9585 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:36.930896    9585 main.go:141] libmachine: STDERR: 
	I0617 04:51:36.930912    9585 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:36.930916    9585 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:36.930946    9585 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:27:1a:7b:d4:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:36.932712    9585 main.go:141] libmachine: STDOUT: 
	I0617 04:51:36.932728    9585 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:36.932749    9585 client.go:171] duration metric: took 276.515708ms to LocalClient.Create
	I0617 04:51:38.935061    9585 start.go:128] duration metric: took 2.30150975s to createHost
	I0617 04:51:38.935138    9585 start.go:83] releasing machines lock for "bridge-696000", held for 2.30164575s
	W0617 04:51:38.935214    9585 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:38.954685    9585 out.go:177] * Deleting "bridge-696000" in qemu2 ...
	W0617 04:51:38.985114    9585 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:38.985150    9585 start.go:728] Will try again in 5 seconds ...
	I0617 04:51:43.987323    9585 start.go:360] acquireMachinesLock for bridge-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:43.987835    9585 start.go:364] duration metric: took 415.667µs to acquireMachinesLock for "bridge-696000"
	I0617 04:51:43.987961    9585 start.go:93] Provisioning new machine with config: &{Name:bridge-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:43.988348    9585 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:43.997838    9585 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:44.040113    9585 start.go:159] libmachine.API.Create for "bridge-696000" (driver="qemu2")
	I0617 04:51:44.040159    9585 client.go:168] LocalClient.Create starting
	I0617 04:51:44.040274    9585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:44.040336    9585 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:44.040366    9585 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:44.040422    9585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:44.040462    9585 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:44.040499    9585 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:44.041015    9585 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:44.202833    9585 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:44.278698    9585 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:44.278704    9585 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:44.278866    9585 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:44.291529    9585 main.go:141] libmachine: STDOUT: 
	I0617 04:51:44.291554    9585 main.go:141] libmachine: STDERR: 
	I0617 04:51:44.291613    9585 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2 +20000M
	I0617 04:51:44.303053    9585 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:44.303067    9585 main.go:141] libmachine: STDERR: 
	I0617 04:51:44.303082    9585 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:44.303087    9585 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:44.303120    9585 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:79:dc:b8:f0:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/bridge-696000/disk.qcow2
	I0617 04:51:44.304956    9585 main.go:141] libmachine: STDOUT: 
	I0617 04:51:44.304971    9585 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:44.304984    9585 client.go:171] duration metric: took 264.822917ms to LocalClient.Create
	I0617 04:51:46.307042    9585 start.go:128] duration metric: took 2.318697292s to createHost
	I0617 04:51:46.307067    9585 start.go:83] releasing machines lock for "bridge-696000", held for 2.319243708s
	W0617 04:51:46.307197    9585 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:46.318486    9585 out.go:177] 
	W0617 04:51:46.323494    9585 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:51:46.323499    9585 out.go:239] * 
	* 
	W0617 04:51:46.324019    9585 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:46.334459    9585 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
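
Each QEMU launch in these logs is wrapped by socket_vmnet_client, which connects to the socket and hands the child the connection as an inherited descriptor (hence "-netdev socket,id=net0,fd=3" in the command lines above). The wrapper can be exercised with a no-op command in place of qemu-system-aarch64 to separate socket problems from VM problems; a sketch only, not part of the test suite:

	# If this also prints 'Failed to connect to "/var/run/socket_vmnet"',
	# the fault lies in socket_vmnet itself, not in QEMU or minikube:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true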

TestNetworkPlugins/group/kubenet/Start (9.9s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-696000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.894109208s)

-- stdout --
	* [kubenet-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-696000" primary control-plane node in "kubenet-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:51:48.537953    9698 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:51:48.538079    9698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:48.538084    9698 out.go:304] Setting ErrFile to fd 2...
	I0617 04:51:48.538087    9698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:51:48.538215    9698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:51:48.539293    9698 out.go:298] Setting JSON to false
	I0617 04:51:48.556192    9698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4878,"bootTime":1718620230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:51:48.556268    9698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:51:48.561618    9698 out.go:177] * [kubenet-696000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:51:48.569682    9698 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:51:48.572760    9698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:51:48.569751    9698 notify.go:220] Checking for updates...
	I0617 04:51:48.576623    9698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:51:48.579686    9698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:51:48.583644    9698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:51:48.586646    9698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:51:48.590003    9698 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:51:48.590068    9698 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:51:48.590117    9698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:51:48.594618    9698 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:51:48.601692    9698 start.go:297] selected driver: qemu2
	I0617 04:51:48.601699    9698 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:51:48.601708    9698 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:51:48.603789    9698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:51:48.606626    9698 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:51:48.610744    9698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:51:48.610803    9698 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0617 04:51:48.610832    9698 start.go:340] cluster config:
	{Name:kubenet-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:51:48.615160    9698 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:51:48.623608    9698 out.go:177] * Starting "kubenet-696000" primary control-plane node in "kubenet-696000" cluster
	I0617 04:51:48.627692    9698 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:51:48.627705    9698 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:51:48.627719    9698 cache.go:56] Caching tarball of preloaded images
	I0617 04:51:48.627771    9698 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:51:48.627776    9698 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:51:48.627846    9698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kubenet-696000/config.json ...
	I0617 04:51:48.627862    9698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/kubenet-696000/config.json: {Name:mk092231786708d7a63ea72eceec1072680a50df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:51:48.628126    9698 start.go:360] acquireMachinesLock for kubenet-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:48.628168    9698 start.go:364] duration metric: took 34.25µs to acquireMachinesLock for "kubenet-696000"
	I0617 04:51:48.628181    9698 start.go:93] Provisioning new machine with config: &{Name:kubenet-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kubenet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:48.628214    9698 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:48.635644    9698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:48.652315    9698 start.go:159] libmachine.API.Create for "kubenet-696000" (driver="qemu2")
	I0617 04:51:48.652344    9698 client.go:168] LocalClient.Create starting
	I0617 04:51:48.652407    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:48.652440    9698 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:48.652453    9698 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:48.652500    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:48.652523    9698 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:48.652531    9698 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:48.652910    9698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:48.808246    9698 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:49.009388    9698 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:49.009398    9698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:49.009602    9698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:49.022689    9698 main.go:141] libmachine: STDOUT: 
	I0617 04:51:49.022715    9698 main.go:141] libmachine: STDERR: 
	I0617 04:51:49.022769    9698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2 +20000M
	I0617 04:51:49.033860    9698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:49.033877    9698 main.go:141] libmachine: STDERR: 
	I0617 04:51:49.033894    9698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:49.033906    9698 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:49.033933    9698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:9b:f1:f6:1e:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:49.035683    9698 main.go:141] libmachine: STDOUT: 
	I0617 04:51:49.035698    9698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:49.035720    9698 client.go:171] duration metric: took 383.373833ms to LocalClient.Create
	I0617 04:51:51.038002    9698 start.go:128] duration metric: took 2.409782292s to createHost
	I0617 04:51:51.038081    9698 start.go:83] releasing machines lock for "kubenet-696000", held for 2.409927s
	W0617 04:51:51.038156    9698 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:51.050556    9698 out.go:177] * Deleting "kubenet-696000" in qemu2 ...
	W0617 04:51:51.082731    9698 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:51.082769    9698 start.go:728] Will try again in 5 seconds ...
	I0617 04:51:56.084851    9698 start.go:360] acquireMachinesLock for kubenet-696000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:51:56.085123    9698 start.go:364] duration metric: took 232.167µs to acquireMachinesLock for "kubenet-696000"
	I0617 04:51:56.085160    9698 start.go:93] Provisioning new machine with config: &{Name:kubenet-696000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kubenet-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:51:56.085297    9698 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:51:56.094088    9698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0617 04:51:56.129272    9698 start.go:159] libmachine.API.Create for "kubenet-696000" (driver="qemu2")
	I0617 04:51:56.129324    9698 client.go:168] LocalClient.Create starting
	I0617 04:51:56.129415    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:51:56.129483    9698 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:56.129498    9698 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:56.129566    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:51:56.129605    9698 main.go:141] libmachine: Decoding PEM data...
	I0617 04:51:56.129617    9698 main.go:141] libmachine: Parsing certificate...
	I0617 04:51:56.130113    9698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:51:56.289499    9698 main.go:141] libmachine: Creating SSH key...
	I0617 04:51:56.335133    9698 main.go:141] libmachine: Creating Disk image...
	I0617 04:51:56.335138    9698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:51:56.335305    9698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:56.347993    9698 main.go:141] libmachine: STDOUT: 
	I0617 04:51:56.348020    9698 main.go:141] libmachine: STDERR: 
	I0617 04:51:56.348074    9698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2 +20000M
	I0617 04:51:56.359233    9698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:51:56.359251    9698 main.go:141] libmachine: STDERR: 
	I0617 04:51:56.359263    9698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:56.359269    9698 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:51:56.359309    9698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:26:cf:7f:9f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/kubenet-696000/disk.qcow2
	I0617 04:51:56.361090    9698 main.go:141] libmachine: STDOUT: 
	I0617 04:51:56.361106    9698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:51:56.361119    9698 client.go:171] duration metric: took 231.793125ms to LocalClient.Create
	I0617 04:51:58.363298    9698 start.go:128] duration metric: took 2.277990542s to createHost
	I0617 04:51:58.363371    9698 start.go:83] releasing machines lock for "kubenet-696000", held for 2.278254916s
	W0617 04:51:58.363759    9698 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:51:58.377506    9698 out.go:177] 
	W0617 04:51:58.380607    9698 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:51:58.380631    9698 out.go:239] * 
	W0617 04:51:58.383109    9698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:51:58.393550    9698 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.90s)
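
All of this failure happens on the host side: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach a socket_vmnet daemon listening on /var/run/socket_vmnet, and the captured STDERR shows that dial being refused before any VM boots. A minimal, hypothetical Go sketch of that reachability probe (not minikube's own code; the socket path is taken from the log above) looks like:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path from the log above; socket_vmnet may be launched
        // with a different path on other hosts (assumption).
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "Connection refused" here matches the STDERR captured in
            // the log: the socket exists but no daemon is listening.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Until that dial succeeds, every qemu2 start in this run fails the same way, regardless of the Kubernetes version or network plugin under test.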

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.875486333s)

                                                
                                                
-- stdout --
	* [old-k8s-version-013000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-013000" primary control-plane node in "old-k8s-version-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:52:00.631368    9816 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:00.631932    9816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:00.631945    9816 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:00.631953    9816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:00.632316    9816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:00.633988    9816 out.go:298] Setting JSON to false
	I0617 04:52:00.650765    9816 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4890,"bootTime":1718620230,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:00.650820    9816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:00.658005    9816 out.go:177] * [old-k8s-version-013000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:00.669864    9816 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:00.665995    9816 notify.go:220] Checking for updates...
	I0617 04:52:00.676949    9816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:00.680928    9816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:00.683964    9816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:00.686997    9816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:00.689957    9816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:00.693332    9816 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:00.693411    9816 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:52:00.693457    9816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:00.697980    9816 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:52:00.705664    9816 start.go:297] selected driver: qemu2
	I0617 04:52:00.705672    9816 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:52:00.705678    9816 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:00.708087    9816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:52:00.712187    9816 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:52:00.714997    9816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:00.715036    9816 cni.go:84] Creating CNI manager for ""
	I0617 04:52:00.715043    9816 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0617 04:52:00.715076    9816 start.go:340] cluster config:
	{Name:old-k8s-version-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:00.719269    9816 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:00.726993    9816 out.go:177] * Starting "old-k8s-version-013000" primary control-plane node in "old-k8s-version-013000" cluster
	I0617 04:52:00.730969    9816 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:52:00.730985    9816 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:52:00.730996    9816 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:00.731067    9816 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:00.731072    9816 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0617 04:52:00.731132    9816 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/old-k8s-version-013000/config.json ...
	I0617 04:52:00.731142    9816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/old-k8s-version-013000/config.json: {Name:mkee5c9e36f57a389a3d661fe971d552456ee690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:52:00.731511    9816 start.go:360] acquireMachinesLock for old-k8s-version-013000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:00.731544    9816 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "old-k8s-version-013000"
	I0617 04:52:00.731553    9816 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:00.731581    9816 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:00.735791    9816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:00.750952    9816 start.go:159] libmachine.API.Create for "old-k8s-version-013000" (driver="qemu2")
	I0617 04:52:00.750991    9816 client.go:168] LocalClient.Create starting
	I0617 04:52:00.751062    9816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:00.751092    9816 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:00.751104    9816 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:00.751148    9816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:00.751171    9816 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:00.751179    9816 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:00.751640    9816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:00.903723    9816 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:01.092873    9816 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:01.092883    9816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:01.093089    9816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:01.106185    9816 main.go:141] libmachine: STDOUT: 
	I0617 04:52:01.106204    9816 main.go:141] libmachine: STDERR: 
	I0617 04:52:01.106270    9816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2 +20000M
	I0617 04:52:01.117358    9816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:01.117374    9816 main.go:141] libmachine: STDERR: 
	I0617 04:52:01.117388    9816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:01.117393    9816 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:01.117426    9816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:7e:3b:6d:31:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:01.119138    9816 main.go:141] libmachine: STDOUT: 
	I0617 04:52:01.119152    9816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:01.119171    9816 client.go:171] duration metric: took 368.176375ms to LocalClient.Create
	I0617 04:52:03.121263    9816 start.go:128] duration metric: took 2.38969425s to createHost
	I0617 04:52:03.121293    9816 start.go:83] releasing machines lock for "old-k8s-version-013000", held for 2.389768334s
	W0617 04:52:03.121347    9816 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:03.134832    9816 out.go:177] * Deleting "old-k8s-version-013000" in qemu2 ...
	W0617 04:52:03.161259    9816 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:03.161274    9816 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:08.163390    9816 start.go:360] acquireMachinesLock for old-k8s-version-013000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:08.163677    9816 start.go:364] duration metric: took 223.75µs to acquireMachinesLock for "old-k8s-version-013000"
	I0617 04:52:08.163725    9816 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:08.163871    9816 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:08.173257    9816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:08.205353    9816 start.go:159] libmachine.API.Create for "old-k8s-version-013000" (driver="qemu2")
	I0617 04:52:08.205408    9816 client.go:168] LocalClient.Create starting
	I0617 04:52:08.205527    9816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:08.205581    9816 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:08.205595    9816 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:08.205645    9816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:08.205682    9816 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:08.205693    9816 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:08.206077    9816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:08.364113    9816 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:08.407039    9816 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:08.407044    9816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:08.407212    9816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:08.420055    9816 main.go:141] libmachine: STDOUT: 
	I0617 04:52:08.420076    9816 main.go:141] libmachine: STDERR: 
	I0617 04:52:08.420132    9816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2 +20000M
	I0617 04:52:08.431174    9816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:08.431197    9816 main.go:141] libmachine: STDERR: 
	I0617 04:52:08.431211    9816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:08.431216    9816 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:08.431260    9816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:94:4c:e8:c4:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:08.433102    9816 main.go:141] libmachine: STDOUT: 
	I0617 04:52:08.433119    9816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:08.433135    9816 client.go:171] duration metric: took 227.724583ms to LocalClient.Create
	I0617 04:52:10.435381    9816 start.go:128] duration metric: took 2.271471958s to createHost
	I0617 04:52:10.435485    9816 start.go:83] releasing machines lock for "old-k8s-version-013000", held for 2.271814s
	W0617 04:52:10.435811    9816 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:10.445226    9816 out.go:177] 
	W0617 04:52:10.452338    9816 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:10.452394    9816 out.go:239] * 
	W0617 04:52:10.455176    9816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:10.465252    9816 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (65.115083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)
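
The retry visible above ("Will try again in 5 seconds") recreates the disk image and SSH key but then hits the identical refusal: the missing piece is the host daemon, not the machine, so re-provisioning cannot help. When triaging a run like this, it is useful to separate "socket file absent" from "file present, listener gone". A small hypothetical Go helper (path again taken from the log) could distinguish the two:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the log above

        if _, err := os.Stat(sock); errors.Is(err, fs.ErrNotExist) {
            // No socket file: socket_vmnet was never started, or it was
            // launched with a different socket path.
            fmt.Fprintln(os.Stderr, "socket file missing:", err)
            os.Exit(1)
        }
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // File present but nobody accepts: a stale socket left by a
            // daemon that exited, i.e. the "Connection refused" case above.
            fmt.Fprintln(os.Stderr, "no listener on socket:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet reachable")
    }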

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-013000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-013000 create -f testdata/busybox.yaml: exit status 1 (30.680416ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-013000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-013000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (29.712417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (29.517042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
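
This subtest and the ones after it fail by cascade rather than on their own: kubectl reports that the context does not exist because minikube only writes a context into the kubeconfig once a start succeeds. A hedged sketch of how one could confirm that from the kubeconfig this run uses (path from the stdout above; assumes k8s.io/client-go is available in the module):

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // KUBECONFIG value from the stdout above.
        cfg, err := clientcmd.LoadFromFile(
            "/Users/jenkins/minikube-integration/19087-6045/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
        for name := range cfg.Contexts {
            fmt.Println("context:", name)
        }
        // "old-k8s-version-013000" is absent here, which is why kubectl
        // exits with "context does not exist" rather than a connection
        // error against a stopped cluster.
    }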

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-013000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-013000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-013000 describe deploy/metrics-server -n kube-system: exit status 1 (28.55575ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-013000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-013000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (30.189916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
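
The assertion at start_stop_delete_test.go:221 expects the deployment description to contain the custom registry joined to the custom image, i.e. " fake.domain/registry.k8s.io/echoserver:1.4"; since the describe call returned nothing, the substring check necessarily fails. A sketch of that expectation (illustrative Go, not the test's literal code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        registry := "fake.domain"                 // from --registries
        image := "registry.k8s.io/echoserver:1.4" // from --images
        // Leading space mirrors the substring quoted in the failure above.
        expected := fmt.Sprintf(" %s/%s", registry, image)

        deploymentInfo := "" // empty here: `kubectl describe` never ran
        fmt.Println("addon image loaded:",
            strings.Contains(deploymentInfo, expected))
    }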

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.194596291s)

                                                
                                                
-- stdout --
	* [old-k8s-version-013000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-013000" primary control-plane node in "old-k8s-version-013000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:52:14.210989    9875 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:14.211117    9875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:14.211120    9875 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:14.211123    9875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:14.211249    9875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:14.212350    9875 out.go:298] Setting JSON to false
	I0617 04:52:14.229509    9875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4904,"bootTime":1718620230,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:14.229608    9875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:14.234259    9875 out.go:177] * [old-k8s-version-013000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:14.242275    9875 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:14.242323    9875 notify.go:220] Checking for updates...
	I0617 04:52:14.249238    9875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:14.252267    9875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:14.255321    9875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:14.256754    9875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:14.260228    9875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:14.263522    9875 config.go:182] Loaded profile config "old-k8s-version-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0617 04:52:14.267272    9875 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 04:52:14.270246    9875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:14.274269    9875 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:52:14.281270    9875 start.go:297] selected driver: qemu2
	I0617 04:52:14.281276    9875 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:14.281340    9875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:14.283587    9875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:14.283623    9875 cni.go:84] Creating CNI manager for ""
	I0617 04:52:14.283630    9875 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0617 04:52:14.283653    9875 start.go:340] cluster config:
	{Name:old-k8s-version-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:14.288114    9875 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:14.297477    9875 out.go:177] * Starting "old-k8s-version-013000" primary control-plane node in "old-k8s-version-013000" cluster
	I0617 04:52:14.302263    9875 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:52:14.302285    9875 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:52:14.302296    9875 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:14.302355    9875 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:14.302360    9875 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0617 04:52:14.302415    9875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/old-k8s-version-013000/config.json ...
	I0617 04:52:14.302908    9875 start.go:360] acquireMachinesLock for old-k8s-version-013000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:14.302941    9875 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "old-k8s-version-013000"
	I0617 04:52:14.302950    9875 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:14.302954    9875 fix.go:54] fixHost starting: 
	I0617 04:52:14.303068    9875 fix.go:112] recreateIfNeeded on old-k8s-version-013000: state=Stopped err=<nil>
	W0617 04:52:14.303077    9875 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:14.307222    9875 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-013000" ...
	I0617 04:52:14.315340    9875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:94:4c:e8:c4:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:14.317296    9875 main.go:141] libmachine: STDOUT: 
	I0617 04:52:14.317308    9875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:14.317334    9875 fix.go:56] duration metric: took 14.378667ms for fixHost
	I0617 04:52:14.317341    9875 start.go:83] releasing machines lock for "old-k8s-version-013000", held for 14.392375ms
	W0617 04:52:14.317349    9875 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:14.317385    9875 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:14.317389    9875 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:19.318064    9875 start.go:360] acquireMachinesLock for old-k8s-version-013000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:19.318523    9875 start.go:364] duration metric: took 379.333µs to acquireMachinesLock for "old-k8s-version-013000"
	I0617 04:52:19.318677    9875 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:19.318694    9875 fix.go:54] fixHost starting: 
	I0617 04:52:19.319289    9875 fix.go:112] recreateIfNeeded on old-k8s-version-013000: state=Stopped err=<nil>
	W0617 04:52:19.319311    9875 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:19.328828    9875 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-013000" ...
	I0617 04:52:19.332891    9875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:94:4c:e8:c4:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/old-k8s-version-013000/disk.qcow2
	I0617 04:52:19.341517    9875 main.go:141] libmachine: STDOUT: 
	I0617 04:52:19.341578    9875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:19.341692    9875 fix.go:56] duration metric: took 22.996916ms for fixHost
	I0617 04:52:19.341710    9875 start.go:83] releasing machines lock for "old-k8s-version-013000", held for 23.167083ms
	W0617 04:52:19.341942    9875 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:19.349801    9875 out.go:177] 
	W0617 04:52:19.353831    9875 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:19.353857    9875 out.go:239] * 
	W0617 04:52:19.355908    9875 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:19.362784    9875 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-013000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (62.97675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
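Every failure in this group reduces to the same root cause: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing was listening on the socket_vmnet unix socket when socket_vmnet_client tried to pass QEMU its network file descriptor. A minimal Go sketch, not part of the test suite, that probes the socket the way a client would (the path is taken from the SocketVMnetPath field in the cluster config above):

	// probe_socket_vmnet.go - illustrative sketch, not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A connection-refused error here matches the "Connection refused"
			// in the libmachine STDERR output: no daemon is listening.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}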
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-013000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (32.112917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-013000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-013000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-013000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.711875ms)

** stderr ** 
	error: context "old-k8s-version-013000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-013000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (29.488417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-013000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (29.314875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
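The -want +got block above is a diff in go-cmp's format: with the VM never started, "minikube image list" returns nothing, so every expected v1.20.0 image surfaces as a deletion. A hedged sketch of how such a diff can be produced with github.com/google/go-cmp (the want/got variables are assumptions, not the test's actual code):

	// image_diff.go - illustrative sketch using github.com/google/go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		got := []string{} // nothing listed: the host never came up
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}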
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-013000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-013000 --alsologtostderr -v=1: exit status 83 (42.375292ms)

-- stdout --
	* The control-plane node old-k8s-version-013000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-013000"

-- /stdout --
** stderr ** 
	I0617 04:52:19.628655    9894 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:19.629725    9894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:19.629729    9894 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:19.629732    9894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:19.629857    9894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:19.630075    9894 out.go:298] Setting JSON to false
	I0617 04:52:19.630085    9894 mustload.go:65] Loading cluster: old-k8s-version-013000
	I0617 04:52:19.630257    9894 config.go:182] Loaded profile config "old-k8s-version-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0617 04:52:19.635150    9894 out.go:177] * The control-plane node old-k8s-version-013000 host is not running: state=Stopped
	I0617 04:52:19.639120    9894 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-013000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-013000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (29.067625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (30.260166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-013000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
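The post-mortem helper invokes "status --format={{.Host}}", a Go text/template with a single field that renders as the bare "Stopped" seen in each stdout block. A stand-in sketch of that rendering (the Status struct here is illustrative, not minikube's actual type):

	// status_format.go - illustrative sketch of a --format Go template.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host string // e.g. "Running" or "Stopped"
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Renders exactly "Stopped", matching the post-mortem stdout above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}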
TestStartStop/group/no-preload/serial/FirstStart (9.88s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.829344834s)

-- stdout --
	* [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:20.094226    9917 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:20.094369    9917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:20.094372    9917 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:20.094374    9917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:20.094497    9917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:20.095545    9917 out.go:298] Setting JSON to false
	I0617 04:52:20.111697    9917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4910,"bootTime":1718620230,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:20.111762    9917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:20.116907    9917 out.go:177] * [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:20.123905    9917 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:20.126841    9917 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:20.123973    9917 notify.go:220] Checking for updates...
	I0617 04:52:20.132852    9917 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:20.134356    9917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:20.137827    9917 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:20.144787    9917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:20.148185    9917 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:20.148253    9917 config.go:182] Loaded profile config "stopped-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0617 04:52:20.148292    9917 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:20.151886    9917 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:52:20.158844    9917 start.go:297] selected driver: qemu2
	I0617 04:52:20.158849    9917 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:52:20.158855    9917 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:20.160960    9917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:52:20.164893    9917 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:52:20.168006    9917 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:20.168048    9917 cni.go:84] Creating CNI manager for ""
	I0617 04:52:20.168057    9917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:20.168061    9917 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:52:20.168089    9917 start.go:340] cluster config:
	{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:20.172641    9917 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.180889    9917 out.go:177] * Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	I0617 04:52:20.184811    9917 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:20.184882    9917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/no-preload-828000/config.json ...
	I0617 04:52:20.184888    9917 cache.go:107] acquiring lock: {Name:mk659eb9e8657f0d926428caab9cd1d5e2e37549 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.184899    9917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/no-preload-828000/config.json: {Name:mk9de01120f1ea0fe8723462eba9e8e21c11c0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:52:20.184917    9917 cache.go:107] acquiring lock: {Name:mkb753e956c8bc4b98b6ef27b7587beb55d2e378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.184972    9917 cache.go:107] acquiring lock: {Name:mka462cb12599337d82ab6da925ea3122d4f3fe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185061    9917 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 04:52:20.185107    9917 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0617 04:52:20.185102    9917 cache.go:107] acquiring lock: {Name:mk26e395759155200bf58d1c2651943e2f1c2ab9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185123    9917 cache.go:107] acquiring lock: {Name:mk9619555ebe4be08621ceba1e5f86dba9db1fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185116    9917 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 227.875µs
	I0617 04:52:20.185185    9917 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0617 04:52:20.185059    9917 cache.go:107] acquiring lock: {Name:mkb9ae1daa20dbb04b5a86dca1294c22681e1cf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185165    9917 cache.go:107] acquiring lock: {Name:mk649656c858ad16a0a63bec3e75b357e7bcb9d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185233    9917 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:20.185237    9917 cache.go:107] acquiring lock: {Name:mkd93a44cb5cf44da933f43038ca18763d6369b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:20.185301    9917 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 04:52:20.185334    9917 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 04:52:20.185348    9917 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 04:52:20.185415    9917 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 04:52:20.185416    9917 start.go:364] duration metric: took 177.167µs to acquireMachinesLock for "no-preload-828000"
	I0617 04:52:20.185436    9917 start.go:93] Provisioning new machine with config: &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:20.185480    9917 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:20.185517    9917 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0617 04:52:20.185524    9917 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 04:52:20.193859    9917 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:20.200608    9917 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0617 04:52:20.201162    9917 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0617 04:52:20.201171    9917 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0617 04:52:20.201170    9917 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 04:52:20.201223    9917 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0617 04:52:20.201253    9917 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0617 04:52:20.201281    9917 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0617 04:52:20.209881    9917 start.go:159] libmachine.API.Create for "no-preload-828000" (driver="qemu2")
	I0617 04:52:20.209902    9917 client.go:168] LocalClient.Create starting
	I0617 04:52:20.209976    9917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:20.210004    9917 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:20.210020    9917 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:20.210062    9917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:20.210085    9917 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:20.210091    9917 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:20.210466    9917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:20.369083    9917 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:20.441298    9917 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:20.441326    9917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:20.441528    9917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:20.454818    9917 main.go:141] libmachine: STDOUT: 
	I0617 04:52:20.454839    9917 main.go:141] libmachine: STDERR: 
	I0617 04:52:20.454895    9917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2 +20000M
	I0617 04:52:20.467180    9917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:20.467199    9917 main.go:141] libmachine: STDERR: 
	I0617 04:52:20.467219    9917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:20.467223    9917 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:20.467260    9917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:94:ba:68:d3:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:20.469078    9917 main.go:141] libmachine: STDOUT: 
	I0617 04:52:20.469093    9917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:20.469112    9917 client.go:171] duration metric: took 259.207209ms to LocalClient.Create
	I0617 04:52:21.078252    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0617 04:52:21.081293    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0617 04:52:21.139841    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0617 04:52:21.142856    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0617 04:52:21.263101    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0617 04:52:21.282332    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0617 04:52:21.292488    9917 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0617 04:52:21.392770    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0617 04:52:21.392808    9917 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.207678125s
	I0617 04:52:21.392829    9917 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0617 04:52:22.473183    9917 start.go:128] duration metric: took 2.28770575s to createHost
	I0617 04:52:22.473198    9917 start.go:83] releasing machines lock for "no-preload-828000", held for 2.287801583s
	W0617 04:52:22.473209    9917 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:22.484897    9917 out.go:177] * Deleting "no-preload-828000" in qemu2 ...
	W0617 04:52:22.500668    9917 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:22.500681    9917 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:24.032439    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0617 04:52:24.032539    9917 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 3.847532292s
	I0617 04:52:24.032571    9917 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0617 04:52:25.253079    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0617 04:52:25.253142    9917 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 5.068225417s
	I0617 04:52:25.253172    9917 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0617 04:52:25.377397    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0617 04:52:25.377444    9917 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 5.192580167s
	I0617 04:52:25.377470    9917 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0617 04:52:25.424171    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0617 04:52:25.424207    9917 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.239036666s
	I0617 04:52:25.424284    9917 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0617 04:52:27.176225    9917 cache.go:157] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0617 04:52:27.176279    9917 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 6.991312125s
	I0617 04:52:27.176306    9917 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0617 04:52:27.501233    9917 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:27.501685    9917 start.go:364] duration metric: took 376.375µs to acquireMachinesLock for "no-preload-828000"
	I0617 04:52:27.501846    9917 start.go:93] Provisioning new machine with config: &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:27.502122    9917 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:27.514683    9917 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:27.564660    9917 start.go:159] libmachine.API.Create for "no-preload-828000" (driver="qemu2")
	I0617 04:52:27.564719    9917 client.go:168] LocalClient.Create starting
	I0617 04:52:27.564832    9917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:27.564904    9917 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:27.564928    9917 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:27.564998    9917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:27.565042    9917 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:27.565058    9917 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:27.565541    9917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:27.729854    9917 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:27.817563    9917 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:27.817569    9917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:27.817746    9917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:27.830466    9917 main.go:141] libmachine: STDOUT: 
	I0617 04:52:27.830484    9917 main.go:141] libmachine: STDERR: 
	I0617 04:52:27.830540    9917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2 +20000M
	I0617 04:52:27.841736    9917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:27.841752    9917 main.go:141] libmachine: STDERR: 
	I0617 04:52:27.841763    9917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:27.841767    9917 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:27.841804    9917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:5d:85:83:f1:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:27.843590    9917 main.go:141] libmachine: STDOUT: 
	I0617 04:52:27.843605    9917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:27.843619    9917 client.go:171] duration metric: took 278.897958ms to LocalClient.Create
	I0617 04:52:29.843981    9917 start.go:128] duration metric: took 2.34182275s to createHost
	I0617 04:52:29.844049    9917 start.go:83] releasing machines lock for "no-preload-828000", held for 2.342361208s
	W0617 04:52:29.844453    9917 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:29.857112    9917 out.go:177] 
	W0617 04:52:29.868081    9917 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:29.868121    9917 out.go:239] * 
	W0617 04:52:29.870915    9917 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:29.882069    9917 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (51.282333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.951471s)

-- stdout --
	* [embed-certs-769000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-769000" primary control-plane node in "embed-certs-769000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:22.367947    9959 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:22.368085    9959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:22.368088    9959 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:22.368090    9959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:22.368220    9959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:22.369325    9959 out.go:298] Setting JSON to false
	I0617 04:52:22.385933    9959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4912,"bootTime":1718620230,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:22.386029    9959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:22.391021    9959 out.go:177] * [embed-certs-769000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:22.402864    9959 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:22.397954    9959 notify.go:220] Checking for updates...
	I0617 04:52:22.410938    9959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:22.416900    9959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:22.424776    9959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:22.431864    9959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:22.438888    9959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:22.443207    9959 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:22.443275    9959 config.go:182] Loaded profile config "no-preload-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:22.443331    9959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:22.446800    9959 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:52:22.453704    9959 start.go:297] selected driver: qemu2
	I0617 04:52:22.453713    9959 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:52:22.453719    9959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:22.456011    9959 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:52:22.459908    9959 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:52:22.463984    9959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:22.464006    9959 cni.go:84] Creating CNI manager for ""
	I0617 04:52:22.464013    9959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:22.464017    9959 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:52:22.464045    9959 start.go:340] cluster config:
	{Name:embed-certs-769000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-769000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:22.468701    9959 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:22.476853    9959 out.go:177] * Starting "embed-certs-769000" primary control-plane node in "embed-certs-769000" cluster
	I0617 04:52:22.484912    9959 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:22.484930    9959 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:22.484941    9959 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:22.485037    9959 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:22.485044    9959 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:22.485126    9959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/embed-certs-769000/config.json ...
	I0617 04:52:22.485137    9959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/embed-certs-769000/config.json: {Name:mk55f9f7aed36ca16f66798b714c184b6909369d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:52:22.485422    9959 start.go:360] acquireMachinesLock for embed-certs-769000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:22.485460    9959 start.go:364] duration metric: took 31.333µs to acquireMachinesLock for "embed-certs-769000"
	I0617 04:52:22.485470    9959 start.go:93] Provisioning new machine with config: &{Name:embed-certs-769000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-769000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:22.485505    9959 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:22.495873    9959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:22.515045    9959 start.go:159] libmachine.API.Create for "embed-certs-769000" (driver="qemu2")
	I0617 04:52:22.515072    9959 client.go:168] LocalClient.Create starting
	I0617 04:52:22.515145    9959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:22.515177    9959 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:22.515188    9959 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:22.515229    9959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:22.515254    9959 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:22.515266    9959 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:22.515657    9959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:22.671146    9959 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:22.775620    9959 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:22.775627    9959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:22.775799    9959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:22.788502    9959 main.go:141] libmachine: STDOUT: 
	I0617 04:52:22.788522    9959 main.go:141] libmachine: STDERR: 
	I0617 04:52:22.788568    9959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2 +20000M
	I0617 04:52:22.800005    9959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:22.800020    9959 main.go:141] libmachine: STDERR: 
	I0617 04:52:22.800040    9959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:22.800043    9959 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:22.800083    9959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c0:d8:42:b1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:22.801742    9959 main.go:141] libmachine: STDOUT: 
	I0617 04:52:22.801757    9959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:22.801774    9959 client.go:171] duration metric: took 286.6995ms to LocalClient.Create
	I0617 04:52:24.804075    9959 start.go:128] duration metric: took 2.318532458s to createHost
	I0617 04:52:24.804165    9959 start.go:83] releasing machines lock for "embed-certs-769000", held for 2.3187185s
	W0617 04:52:24.804223    9959 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:24.814394    9959 out.go:177] * Deleting "embed-certs-769000" in qemu2 ...
	W0617 04:52:24.853124    9959 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:24.853162    9959 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:29.855245    9959 start.go:360] acquireMachinesLock for embed-certs-769000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:29.855709    9959 start.go:364] duration metric: took 364.208µs to acquireMachinesLock for "embed-certs-769000"
	I0617 04:52:29.855856    9959 start.go:93] Provisioning new machine with config: &{Name:embed-certs-769000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-769000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:29.856183    9959 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:29.863921    9959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:29.914424    9959 start.go:159] libmachine.API.Create for "embed-certs-769000" (driver="qemu2")
	I0617 04:52:29.914483    9959 client.go:168] LocalClient.Create starting
	I0617 04:52:29.914568    9959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:29.914611    9959 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:29.914626    9959 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:29.914686    9959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:29.914715    9959 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:29.914726    9959 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:29.915239    9959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:30.087679    9959 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:30.218170    9959 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:30.218180    9959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:30.221565    9959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:30.233823    9959 main.go:141] libmachine: STDOUT: 
	I0617 04:52:30.233846    9959 main.go:141] libmachine: STDERR: 
	I0617 04:52:30.233894    9959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2 +20000M
	I0617 04:52:30.244820    9959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:30.244838    9959 main.go:141] libmachine: STDERR: 
	I0617 04:52:30.244850    9959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:30.244854    9959 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:30.244894    9959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3a:cb:f6:84:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:30.246488    9959 main.go:141] libmachine: STDOUT: 
	I0617 04:52:30.246503    9959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:30.246517    9959 client.go:171] duration metric: took 332.031958ms to LocalClient.Create
	I0617 04:52:32.248712    9959 start.go:128] duration metric: took 2.392518167s to createHost
	I0617 04:52:32.248819    9959 start.go:83] releasing machines lock for "embed-certs-769000", held for 2.39311225s
	W0617 04:52:32.249156    9959 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:32.258870    9959 out.go:177] 
	W0617 04:52:32.263061    9959 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:32.263097    9959 out.go:239] * 
	* 
	W0617 04:52:32.265189    9959 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:32.276912    9959 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (65.039583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-828000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-828000 create -f testdata/busybox.yaml: exit status 1 (31.172125ms)

** stderr ** 
	error: context "no-preload-828000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-828000 create -f testdata/busybox.yaml failed: exit status 1
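The DeployApp and EnableAddonWhileActive failures in this group are downstream of FirstStart: the VM never booted, so minikube never wrote a "no-preload-828000" context into the kubeconfig, and every kubectl --context invocation fails before reaching a cluster. A hypothetical client-go sketch (not part of the test suite; the kubeconfig path is the one shown in the logs above) that lists which contexts actually exist:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the same kubeconfig the integration run points at.
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	rules.ExplicitPath = "/Users/jenkins/minikube-integration/19087-6045/kubeconfig"

    	cfg, err := rules.Load()
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}

    	// A context only appears here after "minikube start" succeeds,
    	// which is why kubectl reports no-preload-828000 as missing.
    	for name := range cfg.Contexts {
    		fmt.Println("context:", name)
    	}
    }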
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (33.655541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (33.600959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-828000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system: exit status 1 (27.320042ms)

** stderr ** 
	error: context "no-preload-828000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (30.196ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-769000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-769000 create -f testdata/busybox.yaml: exit status 1 (30.148792ms)

** stderr ** 
	error: context "embed-certs-769000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-769000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (29.924459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (28.803917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-769000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-769000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-769000 describe deploy/metrics-server -n kube-system: exit status 1 (26.504375ms)

** stderr ** 
	error: context "embed-certs-769000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-769000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (29.158583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.204937209s)

-- stdout --
	* [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	* Restarting existing qemu2 VM for "no-preload-828000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-828000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:33.276380   10035 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:33.276500   10035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:33.276504   10035 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:33.276510   10035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:33.276645   10035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:33.277764   10035 out.go:298] Setting JSON to false
	I0617 04:52:33.293703   10035 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4923,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:33.293789   10035 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:33.298660   10035 out.go:177] * [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:33.305713   10035 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:33.305751   10035 notify.go:220] Checking for updates...
	I0617 04:52:33.315701   10035 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:33.318619   10035 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:33.321643   10035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:33.328706   10035 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:33.337667   10035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:33.344139   10035 config.go:182] Loaded profile config "no-preload-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:33.344401   10035 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:33.348716   10035 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:52:33.357676   10035 start.go:297] selected driver: qemu2
	I0617 04:52:33.357684   10035 start.go:901] validating driver "qemu2" against &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:33.357760   10035 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:33.360186   10035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:33.360231   10035 cni.go:84] Creating CNI manager for ""
	I0617 04:52:33.360239   10035 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:33.360263   10035 start.go:340] cluster config:
	{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:33.364852   10035 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.371662   10035 out.go:177] * Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	I0617 04:52:33.374693   10035 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:33.374767   10035 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/no-preload-828000/config.json ...
	I0617 04:52:33.374785   10035 cache.go:107] acquiring lock: {Name:mk659eb9e8657f0d926428caab9cd1d5e2e37549 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374800   10035 cache.go:107] acquiring lock: {Name:mka462cb12599337d82ab6da925ea3122d4f3fe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374811   10035 cache.go:107] acquiring lock: {Name:mkd93a44cb5cf44da933f43038ca18763d6369b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374859   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0617 04:52:33.374868   10035 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.75µs
	I0617 04:52:33.374875   10035 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0617 04:52:33.374789   10035 cache.go:107] acquiring lock: {Name:mkb753e956c8bc4b98b6ef27b7587beb55d2e378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374882   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0617 04:52:33.374892   10035 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 82.334µs
	I0617 04:52:33.374898   10035 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0617 04:52:33.374896   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0617 04:52:33.374904   10035 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 117.666µs
	I0617 04:52:33.374905   10035 cache.go:107] acquiring lock: {Name:mk26e395759155200bf58d1c2651943e2f1c2ab9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374918   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0617 04:52:33.374925   10035 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 144.958µs
	I0617 04:52:33.374929   10035 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0617 04:52:33.374909   10035 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0617 04:52:33.374949   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0617 04:52:33.374946   10035 cache.go:107] acquiring lock: {Name:mk649656c858ad16a0a63bec3e75b357e7bcb9d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374954   10035 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 49.625µs
	I0617 04:52:33.374915   10035 cache.go:107] acquiring lock: {Name:mk9619555ebe4be08621ceba1e5f86dba9db1fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.374959   10035 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0617 04:52:33.375002   10035 cache.go:107] acquiring lock: {Name:mkb9ae1daa20dbb04b5a86dca1294c22681e1cf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:33.375015   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0617 04:52:33.375021   10035 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 129.584µs
	I0617 04:52:33.375048   10035 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0617 04:52:33.375044   10035 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0617 04:52:33.375068   10035 cache.go:115] /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0617 04:52:33.375074   10035 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 168.041µs
	I0617 04:52:33.375083   10035 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0617 04:52:33.375210   10035 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:33.375247   10035 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "no-preload-828000"
	I0617 04:52:33.375257   10035 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:33.375266   10035 fix.go:54] fixHost starting: 
	I0617 04:52:33.375410   10035 fix.go:112] recreateIfNeeded on no-preload-828000: state=Stopped err=<nil>
	W0617 04:52:33.375420   10035 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:33.383711   10035 out.go:177] * Restarting existing qemu2 VM for "no-preload-828000" ...
	I0617 04:52:33.383950   10035 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0617 04:52:33.387740   10035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:5d:85:83:f1:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:33.390236   10035 main.go:141] libmachine: STDOUT: 
	I0617 04:52:33.390304   10035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:33.390343   10035 fix.go:56] duration metric: took 15.075709ms for fixHost
	I0617 04:52:33.390349   10035 start.go:83] releasing machines lock for "no-preload-828000", held for 15.097ms
	W0617 04:52:33.390359   10035 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:33.390388   10035 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:33.390394   10035 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:34.268025   10035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0617 04:52:38.391994   10035 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:38.392360   10035 start.go:364] duration metric: took 289.375µs to acquireMachinesLock for "no-preload-828000"
	I0617 04:52:38.392487   10035 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:38.392515   10035 fix.go:54] fixHost starting: 
	I0617 04:52:38.393205   10035 fix.go:112] recreateIfNeeded on no-preload-828000: state=Stopped err=<nil>
	W0617 04:52:38.393231   10035 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:38.398807   10035 out.go:177] * Restarting existing qemu2 VM for "no-preload-828000" ...
	I0617 04:52:38.406837   10035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:5d:85:83:f1:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/no-preload-828000/disk.qcow2
	I0617 04:52:38.417382   10035 main.go:141] libmachine: STDOUT: 
	I0617 04:52:38.417461   10035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:38.417559   10035 fix.go:56] duration metric: took 25.04775ms for fixHost
	I0617 04:52:38.417582   10035 start.go:83] releasing machines lock for "no-preload-828000", held for 25.19575ms
	W0617 04:52:38.417807   10035 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:38.425676   10035 out.go:177] 
	W0617 04:52:38.429830   10035 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:38.429883   10035 out.go:239] * 
	* 
	W0617 04:52:38.432643   10035 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:38.438703   10035 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (64.921708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
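
Every failure in this group traces back to the same root cause visible in the stderr capture above: QEMU is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal sketch for checking the daemon on the affected host, assuming the Homebrew install paths shown in these logs (the probe command is an assumption, not taken from the log):

	# Does the socket exist, and is a daemon process actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# socket_vmnet_client connects to the socket first and then execs its
	# argument, so a trivial command serves as a connectivity probe:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true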

TestStartStop/group/embed-certs/serial/SecondStart (7.35s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (7.284061042s)

-- stdout --
	* [embed-certs-769000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-769000" primary control-plane node in "embed-certs-769000" cluster
	* Restarting existing qemu2 VM for "embed-certs-769000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-769000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:34.648161   10054 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:34.648313   10054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:34.648316   10054 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:34.648319   10054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:34.648459   10054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:34.649460   10054 out.go:298] Setting JSON to false
	I0617 04:52:34.665467   10054 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4924,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:34.665535   10054 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:34.669374   10054 out.go:177] * [embed-certs-769000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:34.680406   10054 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:34.676482   10054 notify.go:220] Checking for updates...
	I0617 04:52:34.687293   10054 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:34.690414   10054 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:34.693457   10054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:34.694731   10054 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:34.697435   10054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:34.700757   10054 config.go:182] Loaded profile config "embed-certs-769000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:34.701014   10054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:34.705287   10054 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:52:34.712401   10054 start.go:297] selected driver: qemu2
	I0617 04:52:34.712406   10054 start.go:901] validating driver "qemu2" against &{Name:embed-certs-769000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:embed-certs-769000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:34.712489   10054 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:34.714812   10054 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:34.714860   10054 cni.go:84] Creating CNI manager for ""
	I0617 04:52:34.714867   10054 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:34.714890   10054 start.go:340] cluster config:
	{Name:embed-certs-769000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-769000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:34.719111   10054 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:34.727421   10054 out.go:177] * Starting "embed-certs-769000" primary control-plane node in "embed-certs-769000" cluster
	I0617 04:52:34.731439   10054 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:34.731453   10054 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:34.731462   10054 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:34.731525   10054 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:34.731531   10054 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:34.731599   10054 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/embed-certs-769000/config.json ...
	I0617 04:52:34.732067   10054 start.go:360] acquireMachinesLock for embed-certs-769000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:34.732102   10054 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "embed-certs-769000"
	I0617 04:52:34.732110   10054 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:34.732116   10054 fix.go:54] fixHost starting: 
	I0617 04:52:34.732234   10054 fix.go:112] recreateIfNeeded on embed-certs-769000: state=Stopped err=<nil>
	W0617 04:52:34.732242   10054 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:34.736381   10054 out.go:177] * Restarting existing qemu2 VM for "embed-certs-769000" ...
	I0617 04:52:34.744447   10054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3a:cb:f6:84:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:34.746389   10054 main.go:141] libmachine: STDOUT: 
	I0617 04:52:34.746408   10054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:34.746438   10054 fix.go:56] duration metric: took 14.319542ms for fixHost
	I0617 04:52:34.746442   10054 start.go:83] releasing machines lock for "embed-certs-769000", held for 14.335542ms
	W0617 04:52:34.746451   10054 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:34.746481   10054 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:34.746486   10054 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:39.748503   10054 start.go:360] acquireMachinesLock for embed-certs-769000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:41.824683   10054 start.go:364] duration metric: took 2.076163917s to acquireMachinesLock for "embed-certs-769000"
	I0617 04:52:41.824773   10054 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:41.824793   10054 fix.go:54] fixHost starting: 
	I0617 04:52:41.825584   10054 fix.go:112] recreateIfNeeded on embed-certs-769000: state=Stopped err=<nil>
	W0617 04:52:41.825619   10054 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:41.836191   10054 out.go:177] * Restarting existing qemu2 VM for "embed-certs-769000" ...
	I0617 04:52:41.851518   10054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3a:cb:f6:84:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/embed-certs-769000/disk.qcow2
	I0617 04:52:41.861696   10054 main.go:141] libmachine: STDOUT: 
	I0617 04:52:41.861773   10054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:41.861866   10054 fix.go:56] duration metric: took 37.069834ms for fixHost
	I0617 04:52:41.861886   10054 start.go:83] releasing machines lock for "embed-certs-769000", held for 37.165ms
	W0617 04:52:41.862150   10054 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-769000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-769000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:41.869100   10054 out.go:177] 
	W0617 04:52:41.873203   10054 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:41.873227   10054 out.go:239] * 
	* 
	W0617 04:52:41.875118   10054 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:41.886110   10054 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-769000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (61.969708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.35s)
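
The same "Connection refused" from socket_vmnet kills this restart as well. If the daemon has died on the CI host, restarting it is usually enough; a sketch assuming socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver docs (the daemon must run as root to use vmnet):

	# Restart the daemon under launchd via Homebrew services
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet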

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-828000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (30.884083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
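
This failure is purely downstream of the start failure: because the VM never came up, minikube never rewrote the no-preload-828000 context into the kubeconfig, so every kubectl call against that context fails. A quick way to confirm which contexts actually exist, using the KUBECONFIG from this run:

	export KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	kubectl config get-contexts      # no-preload-828000 should be absent
	kubectl config current-context   # errors if nothing is set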

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-828000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.443583ms)

** stderr ** 
	error: context "no-preload-828000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (29.45825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-828000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.986917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
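
The (-want +got) block above is go-cmp diff notation: each line prefixed with "-" is an image the test expected to find and did not, and here the got side is empty because the host never started. Against a healthy profile the same data can be inspected directly; a sketch (the table format follows the ImageListTable tests elsewhere in this report):

	out/minikube-darwin-arm64 -p no-preload-828000 image list --format=table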

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1: exit status 83 (40.625375ms)

-- stdout --
	* The control-plane node no-preload-828000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-828000"

-- /stdout --
** stderr ** 
	I0617 04:52:38.703788   10077 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:38.703949   10077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:38.703952   10077 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:38.703954   10077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:38.704080   10077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:38.704289   10077 out.go:298] Setting JSON to false
	I0617 04:52:38.704295   10077 mustload.go:65] Loading cluster: no-preload-828000
	I0617 04:52:38.704511   10077 config.go:182] Loaded profile config "no-preload-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:38.709028   10077 out.go:177] * The control-plane node no-preload-828000 host is not running: state=Stopped
	I0617 04:52:38.713053   10077 out.go:177]   To start a cluster, run: "minikube start -p no-preload-828000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.102875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.604625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
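
Two exit codes recur throughout these post-mortems. minikube status encodes component health bitwise (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so the exit status 7 seen here means all three are down. The pause command's exit status 83 appears to be minikube's advisory "guest not running" exit, which matches the state=Stopped message; that reading is an inference, not stated in the log. Checking the bits by hand:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
	echo $?   # 7 = 1 (host) + 2 (cluster) + 4 (Kubernetes)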

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.908130917s)

-- stdout --
	* [default-k8s-diff-port-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-252000" primary control-plane node in "default-k8s-diff-port-252000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-252000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:39.388638   10112 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:39.388771   10112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:39.388775   10112 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:39.388777   10112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:39.388927   10112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:39.390076   10112 out.go:298] Setting JSON to false
	I0617 04:52:39.407596   10112 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4929,"bootTime":1718620230,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:39.407676   10112 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:39.412345   10112 out.go:177] * [default-k8s-diff-port-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:39.420330   10112 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:39.420369   10112 notify.go:220] Checking for updates...
	I0617 04:52:39.426257   10112 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:39.430310   10112 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:39.433240   10112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:39.436284   10112 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:39.439260   10112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:39.440979   10112 config.go:182] Loaded profile config "embed-certs-769000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:39.441038   10112 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:39.441103   10112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:39.445268   10112 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:52:39.452137   10112 start.go:297] selected driver: qemu2
	I0617 04:52:39.452143   10112 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:52:39.452164   10112 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:39.454322   10112 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:52:39.457307   10112 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:52:39.461353   10112 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:39.461393   10112 cni.go:84] Creating CNI manager for ""
	I0617 04:52:39.461402   10112 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:39.461406   10112 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:52:39.461446   10112 start.go:340] cluster config:
	{Name:default-k8s-diff-port-252000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:39.466003   10112 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:39.474290   10112 out.go:177] * Starting "default-k8s-diff-port-252000" primary control-plane node in "default-k8s-diff-port-252000" cluster
	I0617 04:52:39.478250   10112 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:39.478263   10112 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:39.478271   10112 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:39.478325   10112 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:39.478330   10112 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:39.478387   10112 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/default-k8s-diff-port-252000/config.json ...
	I0617 04:52:39.478397   10112 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/default-k8s-diff-port-252000/config.json: {Name:mk52a325d96e70365dc34f108a8300f6cbbed229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:52:39.478638   10112 start.go:360] acquireMachinesLock for default-k8s-diff-port-252000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:39.478674   10112 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "default-k8s-diff-port-252000"
	I0617 04:52:39.478685   10112 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:39.478724   10112 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:39.487151   10112 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:39.504682   10112 start.go:159] libmachine.API.Create for "default-k8s-diff-port-252000" (driver="qemu2")
	I0617 04:52:39.504712   10112 client.go:168] LocalClient.Create starting
	I0617 04:52:39.504769   10112 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:39.504800   10112 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:39.504813   10112 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:39.504857   10112 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:39.504879   10112 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:39.504890   10112 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:39.505298   10112 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:39.668935   10112 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:39.796560   10112 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:39.796569   10112 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:39.796740   10112 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:39.809406   10112 main.go:141] libmachine: STDOUT: 
	I0617 04:52:39.809424   10112 main.go:141] libmachine: STDERR: 
	I0617 04:52:39.809478   10112 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2 +20000M
	I0617 04:52:39.820391   10112 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:39.820406   10112 main.go:141] libmachine: STDERR: 
	I0617 04:52:39.820419   10112 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:39.820425   10112 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:39.820458   10112 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b7:d7:52:29:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:39.822186   10112 main.go:141] libmachine: STDOUT: 
	I0617 04:52:39.822203   10112 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:39.822223   10112 client.go:171] duration metric: took 317.509167ms to LocalClient.Create
	I0617 04:52:41.824399   10112 start.go:128] duration metric: took 2.345676166s to createHost
	I0617 04:52:41.824501   10112 start.go:83] releasing machines lock for "default-k8s-diff-port-252000", held for 2.345783125s
	W0617 04:52:41.824573   10112 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:41.847197   10112 out.go:177] * Deleting "default-k8s-diff-port-252000" in qemu2 ...
	W0617 04:52:41.906770   10112 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:41.906799   10112 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:46.908985   10112 start.go:360] acquireMachinesLock for default-k8s-diff-port-252000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:46.909409   10112 start.go:364] duration metric: took 294.5µs to acquireMachinesLock for "default-k8s-diff-port-252000"
	I0617 04:52:46.909561   10112 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:46.909908   10112 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:46.919520   10112 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:46.968511   10112 start.go:159] libmachine.API.Create for "default-k8s-diff-port-252000" (driver="qemu2")
	I0617 04:52:46.968563   10112 client.go:168] LocalClient.Create starting
	I0617 04:52:46.968682   10112 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:46.968760   10112 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:46.968775   10112 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:46.968834   10112 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:46.968880   10112 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:46.968895   10112 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:46.969747   10112 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:47.141571   10112 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:47.195342   10112 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:47.195350   10112 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:47.195527   10112 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:47.208090   10112 main.go:141] libmachine: STDOUT: 
	I0617 04:52:47.208112   10112 main.go:141] libmachine: STDERR: 
	I0617 04:52:47.208176   10112 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2 +20000M
	I0617 04:52:47.219058   10112 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:47.219075   10112 main.go:141] libmachine: STDERR: 
	I0617 04:52:47.219097   10112 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:47.219102   10112 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:47.219149   10112 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b8:18:58:2d:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:47.220907   10112 main.go:141] libmachine: STDOUT: 
	I0617 04:52:47.220920   10112 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:47.220934   10112 client.go:171] duration metric: took 252.367792ms to LocalClient.Create
	I0617 04:52:49.223096   10112 start.go:128] duration metric: took 2.31318425s to createHost
	I0617 04:52:49.223150   10112 start.go:83] releasing machines lock for "default-k8s-diff-port-252000", held for 2.313741125s
	W0617 04:52:49.223604   10112 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:49.234254   10112 out.go:177] 
	W0617 04:52:49.242402   10112 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:49.242426   10112 out.go:239] * 
	* 
	W0617 04:52:49.245258   10112 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:49.256244   10112 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (64.393208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)
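Every failure in this group has the same root cause visible in the log above: the qemu2 driver builds its disk image successfully (the qemu-img convert and resize steps report no errors) and then hands the VM to socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket because no socket_vmnet daemon is listening. A minimal triage sketch, assuming the /opt/socket_vmnet install path shown in these logs (the gateway address below is illustrative, not taken from this run):

	# Sanity-check the disk image minikube just wrote (qemu-img ships with QEMU):
	qemu-img info /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2

	# See whether anything is serving the unix socket the driver dials:
	ls -l /var/run/socket_vmnet

	# Start the daemon; it needs root to create the vmnet interface.
	sudo brew services start socket_vmnet
	# Or run the binary from the install prefix used in this run:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet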

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-769000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (31.209666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
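The error here, context "embed-certs-769000" does not exist, means the failed FirstStart never wrote a context into the kubeconfig, so every kubectl call in this group dies before any cluster is contacted. That is quick to confirm with standard kubectl, using the KUBECONFIG path from the logs:

	KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig kubectl config get-contexts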

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-769000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-769000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-769000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.0985ms)

** stderr ** 
	error: context "embed-certs-769000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-769000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (29.571875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
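The assertion behind this test reads the container images out of the dashboard-metrics-scraper deployment and checks for registry.k8s.io/echoserver:1.4. Against a cluster that actually started, the same check can be made by hand with a jsonpath query (standard kubectl; names taken from the failed command above):

	kubectl --context embed-certs-769000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'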

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-769000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (28.495583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
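The -want +got block above is a go-cmp style diff: every expected image sits on the -want side because image list returned nothing from the stopped host. On a healthy profile the got side can be reproduced with the same command, for example piped through jq (assuming jq is installed and the JSON entries expose a repoTags field, as current minikube releases do):

	out/minikube-darwin-arm64 -p embed-certs-769000 image list --format=json | jq -r '.[].repoTags[]' | sort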

TestStartStop/group/embed-certs/serial/Pause (0.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-769000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-769000 --alsologtostderr -v=1: exit status 83 (49.102875ms)

-- stdout --
	* The control-plane node embed-certs-769000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-769000"

-- /stdout --
** stderr ** 
	I0617 04:52:42.150488   10134 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:42.150874   10134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:42.150879   10134 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:42.150881   10134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:42.151087   10134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:42.151358   10134 out.go:298] Setting JSON to false
	I0617 04:52:42.151370   10134 mustload.go:65] Loading cluster: embed-certs-769000
	I0617 04:52:42.151669   10134 config.go:182] Loaded profile config "embed-certs-769000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:42.156601   10134 out.go:177] * The control-plane node embed-certs-769000 host is not running: state=Stopped
	I0617 04:52:42.166763   10134 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-769000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-769000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (29.436916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (29.531542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
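Two exit-code conventions meet in this test: pause returns 83 because the guest is stopped, while status encodes the host, kubelet, and apiserver states as bits in its exit code, which is why 7 (all three stopped) is annotated "may be ok". The components can be read individually with the same Go-template flag the harness uses:

	out/minikube-darwin-arm64 status -p embed-certs-769000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	echo $?   # each set bit marks a stopped component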

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.856926167s)

-- stdout --
	* [newest-cni-774000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-774000" primary control-plane node in "newest-cni-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:42.627613   10157 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:42.627742   10157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:42.627745   10157 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:42.627747   10157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:42.627876   10157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:42.629069   10157 out.go:298] Setting JSON to false
	I0617 04:52:42.645343   10157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4932,"bootTime":1718620230,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:42.645406   10157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:42.649593   10157 out.go:177] * [newest-cni-774000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:42.656641   10157 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:42.660554   10157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:42.656732   10157 notify.go:220] Checking for updates...
	I0617 04:52:42.668547   10157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:42.671540   10157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:42.674547   10157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:42.678603   10157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:42.681898   10157 config.go:182] Loaded profile config "default-k8s-diff-port-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:42.681960   10157 config.go:182] Loaded profile config "multinode-812000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:42.682022   10157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:42.686602   10157 out.go:177] * Using the qemu2 driver based on user configuration
	I0617 04:52:42.693542   10157 start.go:297] selected driver: qemu2
	I0617 04:52:42.693549   10157 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:52:42.693556   10157 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:42.695778   10157 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0617 04:52:42.695803   10157 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0617 04:52:42.700588   10157 out.go:177] * Automatically selected the socket_vmnet network
	I0617 04:52:42.707640   10157 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0617 04:52:42.707660   10157 cni.go:84] Creating CNI manager for ""
	I0617 04:52:42.707668   10157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:42.707681   10157 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:52:42.707728   10157 start.go:340] cluster config:
	{Name:newest-cni-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:42.712207   10157 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:42.719421   10157 out.go:177] * Starting "newest-cni-774000" primary control-plane node in "newest-cni-774000" cluster
	I0617 04:52:42.723597   10157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:42.723612   10157 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:42.723622   10157 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:42.723698   10157 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:42.723704   10157 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:42.723772   10157 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/newest-cni-774000/config.json ...
	I0617 04:52:42.723784   10157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/newest-cni-774000/config.json: {Name:mkd6964c3b00d67a1d3e889cef9bdc4cb18c97ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:52:42.724010   10157 start.go:360] acquireMachinesLock for newest-cni-774000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:42.724050   10157 start.go:364] duration metric: took 34.167µs to acquireMachinesLock for "newest-cni-774000"
	I0617 04:52:42.724062   10157 start.go:93] Provisioning new machine with config: &{Name:newest-cni-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:42.724094   10157 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:42.731517   10157 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:42.750289   10157 start.go:159] libmachine.API.Create for "newest-cni-774000" (driver="qemu2")
	I0617 04:52:42.750315   10157 client.go:168] LocalClient.Create starting
	I0617 04:52:42.750392   10157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:42.750423   10157 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:42.750435   10157 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:42.750488   10157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:42.750511   10157 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:42.750517   10157 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:42.750902   10157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:42.906804   10157 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:42.987780   10157 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:42.987785   10157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:42.987951   10157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:43.000561   10157 main.go:141] libmachine: STDOUT: 
	I0617 04:52:43.000581   10157 main.go:141] libmachine: STDERR: 
	I0617 04:52:43.000643   10157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2 +20000M
	I0617 04:52:43.011692   10157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:43.011709   10157 main.go:141] libmachine: STDERR: 
	I0617 04:52:43.011730   10157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:43.011735   10157 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:43.011772   10157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:30:cb:79:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:43.013548   10157 main.go:141] libmachine: STDOUT: 
	I0617 04:52:43.013562   10157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:43.013583   10157 client.go:171] duration metric: took 263.258542ms to LocalClient.Create
	I0617 04:52:45.015754   10157 start.go:128] duration metric: took 2.291661834s to createHost
	I0617 04:52:45.015805   10157 start.go:83] releasing machines lock for "newest-cni-774000", held for 2.291768167s
	W0617 04:52:45.015870   10157 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:45.031089   10157 out.go:177] * Deleting "newest-cni-774000" in qemu2 ...
	W0617 04:52:45.060480   10157 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:45.060526   10157 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:50.062636   10157 start.go:360] acquireMachinesLock for newest-cni-774000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:50.063060   10157 start.go:364] duration metric: took 356.25µs to acquireMachinesLock for "newest-cni-774000"
	I0617 04:52:50.063221   10157 start.go:93] Provisioning new machine with config: &{Name:newest-cni-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0617 04:52:50.063498   10157 start.go:125] createHost starting for "" (driver="qemu2")
	I0617 04:52:50.069346   10157 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0617 04:52:50.118410   10157 start.go:159] libmachine.API.Create for "newest-cni-774000" (driver="qemu2")
	I0617 04:52:50.118463   10157 client.go:168] LocalClient.Create starting
	I0617 04:52:50.118554   10157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/ca.pem
	I0617 04:52:50.118600   10157 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:50.118625   10157 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:50.118687   10157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19087-6045/.minikube/certs/cert.pem
	I0617 04:52:50.118722   10157 main.go:141] libmachine: Decoding PEM data...
	I0617 04:52:50.118733   10157 main.go:141] libmachine: Parsing certificate...
	I0617 04:52:50.119264   10157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso...
	I0617 04:52:50.284269   10157 main.go:141] libmachine: Creating SSH key...
	I0617 04:52:50.385421   10157 main.go:141] libmachine: Creating Disk image...
	I0617 04:52:50.385430   10157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0617 04:52:50.385630   10157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2.raw /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:50.397724   10157 main.go:141] libmachine: STDOUT: 
	I0617 04:52:50.397754   10157 main.go:141] libmachine: STDERR: 
	I0617 04:52:50.397808   10157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2 +20000M
	I0617 04:52:50.408630   10157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0617 04:52:50.408649   10157 main.go:141] libmachine: STDERR: 
	I0617 04:52:50.408660   10157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:50.408670   10157 main.go:141] libmachine: Starting QEMU VM...
	I0617 04:52:50.408701   10157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4b:83:18:ed:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:50.410304   10157 main.go:141] libmachine: STDOUT: 
	I0617 04:52:50.410319   10157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:50.410332   10157 client.go:171] duration metric: took 291.867417ms to LocalClient.Create
	I0617 04:52:52.412637   10157 start.go:128] duration metric: took 2.34907675s to createHost
	I0617 04:52:52.412740   10157 start.go:83] releasing machines lock for "newest-cni-774000", held for 2.349684667s
	W0617 04:52:52.413088   10157 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:52.427701   10157 out.go:177] 
	W0617 04:52:52.433687   10157 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:52.433763   10157 out.go:239] * 
	* 
	W0617 04:52:52.436555   10157 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:52.444583   10157 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (60.68525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-774000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
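This run also shows minikube's retry path: after the first StartHost failure it deletes the half-created profile, waits 5 seconds, recreates the VM, and only then exits with GUEST_PROVISION. Both attempts fail on the same refused socket, which can be probed independently of minikube with a stock macOS tool:

	sudo lsof /var/run/socket_vmnet   # lists the daemon holding the socket; empty output means nothing is serving it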

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-252000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-252000 create -f testdata/busybox.yaml: exit status 1 (30.021917ms)

** stderr ** 
	error: context "default-k8s-diff-port-252000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-252000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (29.476666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (29.2115ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
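DeployApp fails at kubectl create -f testdata/busybox.yaml for the same missing context. For reference, a minimal stand-in for what that manifest deploys, written as a sketch rather than the repository's exact file (pod name and image are assumptions):

	kubectl --context default-k8s-diff-port-252000 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ['sleep', '3600']
	EOF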

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-252000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-252000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-252000 describe deploy/metrics-server -n kube-system: exit status 1 (27.142917ms)

** stderr ** 
	error: context "default-k8s-diff-port-252000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-252000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (29.189833ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
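The addons enable call above rewrites the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4 via --images and --registries, and the test then checks the deployment for exactly that reference. With a live cluster the rewritten image is visible directly (standard kubectl jsonpath):

	kubectl --context default-k8s-diff-port-252000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'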

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (6.104932083s)

-- stdout --
	* [default-k8s-diff-port-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-252000" primary control-plane node in "default-k8s-diff-port-252000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-252000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-252000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0617 04:52:51.431775   10206 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:51.431898   10206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:51.431901   10206 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:51.431903   10206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:51.432042   10206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:51.433043   10206 out.go:298] Setting JSON to false
	I0617 04:52:51.450461   10206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4941,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:51.450528   10206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:51.455193   10206 out.go:177] * [default-k8s-diff-port-252000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:51.462089   10206 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:51.466106   10206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:51.462120   10206 notify.go:220] Checking for updates...
	I0617 04:52:51.473065   10206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:51.476119   10206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:51.479012   10206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:51.482068   10206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:51.485389   10206 config.go:182] Loaded profile config "default-k8s-diff-port-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:51.485667   10206 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:51.489025   10206 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:52:51.496069   10206 start.go:297] selected driver: qemu2
	I0617 04:52:51.496075   10206 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:51.496176   10206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:51.498545   10206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 04:52:51.498583   10206 cni.go:84] Creating CNI manager for ""
	I0617 04:52:51.498590   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:51.498622   10206 start.go:340] cluster config:
	{Name:default-k8s-diff-port-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-252000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:51.502993   10206 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:51.511065   10206 out.go:177] * Starting "default-k8s-diff-port-252000" primary control-plane node in "default-k8s-diff-port-252000" cluster
	I0617 04:52:51.514007   10206 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:51.514021   10206 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:51.514031   10206 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:51.514093   10206 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:51.514099   10206 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:51.514179   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/default-k8s-diff-port-252000/config.json ...
	I0617 04:52:51.514689   10206 start.go:360] acquireMachinesLock for default-k8s-diff-port-252000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:52.412965   10206 start.go:364] duration metric: took 898.253167ms to acquireMachinesLock for "default-k8s-diff-port-252000"
	I0617 04:52:52.413127   10206 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:52.413162   10206 fix.go:54] fixHost starting: 
	I0617 04:52:52.413826   10206 fix.go:112] recreateIfNeeded on default-k8s-diff-port-252000: state=Stopped err=<nil>
	W0617 04:52:52.413874   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:52.430560   10206 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-252000" ...
	I0617 04:52:52.437803   10206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b8:18:58:2d:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:52.448631   10206 main.go:141] libmachine: STDOUT: 
	I0617 04:52:52.448716   10206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:52.448886   10206 fix.go:56] duration metric: took 35.722333ms for fixHost
	I0617 04:52:52.448905   10206 start.go:83] releasing machines lock for "default-k8s-diff-port-252000", held for 35.901667ms
	W0617 04:52:52.448943   10206 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:52.449094   10206 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:52.449110   10206 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:57.451315   10206 start.go:360] acquireMachinesLock for default-k8s-diff-port-252000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:57.451868   10206 start.go:364] duration metric: took 352.041µs to acquireMachinesLock for "default-k8s-diff-port-252000"
	I0617 04:52:57.452010   10206 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:57.452031   10206 fix.go:54] fixHost starting: 
	I0617 04:52:57.452896   10206 fix.go:112] recreateIfNeeded on default-k8s-diff-port-252000: state=Stopped err=<nil>
	W0617 04:52:57.452927   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:57.462551   10206 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-252000" ...
	I0617 04:52:57.465643   10206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b8:18:58:2d:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/default-k8s-diff-port-252000/disk.qcow2
	I0617 04:52:57.475111   10206 main.go:141] libmachine: STDOUT: 
	I0617 04:52:57.475195   10206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:57.475296   10206 fix.go:56] duration metric: took 23.26625ms for fixHost
	I0617 04:52:57.475318   10206 start.go:83] releasing machines lock for "default-k8s-diff-port-252000", held for 23.421917ms
	W0617 04:52:57.475550   10206 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-252000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-252000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:57.480749   10206 out.go:177] 
	W0617 04:52:57.484648   10206 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:57.484689   10206 out.go:239] * 
	* 
	W0617 04:52:57.488011   10206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:57.495535   10206 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-252000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (65.4795ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.17s)
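Note: both SecondStart failures in this group reduce to the same driver-level error, Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal manual check on the affected host, reusing the client binary and socket path that appear in the log above (the /usr/bin/true payload is only an illustrative stand-in for the qemu command line the driver normally passes):

	# Does the socket_vmnet daemon's unix socket exist?
	ls -l /var/run/socket_vmnet

	# Connect through the same client the driver uses; when the daemon is
	# down this fails with the same "Connection refused" seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If the socket is missing or refuses connections, the socket_vmnet service has to be restarted on the host; as the retry loop above shows, minikube itself cannot recover from this condition.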

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.182505959s)

                                                
                                                
-- stdout --
	* [newest-cni-774000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-774000" primary control-plane node in "newest-cni-774000" cluster
	* Restarting existing qemu2 VM for "newest-cni-774000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-774000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:52:54.545160   10231 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:54.545293   10231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:54.545297   10231 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:54.545299   10231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:54.545415   10231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:54.546462   10231 out.go:298] Setting JSON to false
	I0617 04:52:54.562764   10231 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4944,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:52:54.562844   10231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:52:54.567721   10231 out.go:177] * [newest-cni-774000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:52:54.574659   10231 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:52:54.574695   10231 notify.go:220] Checking for updates...
	I0617 04:52:54.578694   10231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:52:54.581611   10231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:52:54.585644   10231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:52:54.588686   10231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:52:54.591551   10231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:52:54.594961   10231 config.go:182] Loaded profile config "newest-cni-774000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:54.595212   10231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:52:54.598630   10231 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:52:54.605679   10231 start.go:297] selected driver: qemu2
	I0617 04:52:54.605684   10231 start.go:901] validating driver "qemu2" against &{Name:newest-cni-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:54.605750   10231 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:52:54.607917   10231 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0617 04:52:54.607966   10231 cni.go:84] Creating CNI manager for ""
	I0617 04:52:54.607973   10231 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:52:54.607998   10231 start.go:340] cluster config:
	{Name:newest-cni-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:52:54.612318   10231 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:52:54.620626   10231 out.go:177] * Starting "newest-cni-774000" primary control-plane node in "newest-cni-774000" cluster
	I0617 04:52:54.625634   10231 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:52:54.625646   10231 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:52:54.625652   10231 cache.go:56] Caching tarball of preloaded images
	I0617 04:52:54.625700   10231 preload.go:173] Found /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 04:52:54.625706   10231 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:52:54.625773   10231 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/newest-cni-774000/config.json ...
	I0617 04:52:54.626272   10231 start.go:360] acquireMachinesLock for newest-cni-774000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:54.626303   10231 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "newest-cni-774000"
	I0617 04:52:54.626312   10231 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:54.626318   10231 fix.go:54] fixHost starting: 
	I0617 04:52:54.626431   10231 fix.go:112] recreateIfNeeded on newest-cni-774000: state=Stopped err=<nil>
	W0617 04:52:54.626443   10231 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:54.630680   10231 out.go:177] * Restarting existing qemu2 VM for "newest-cni-774000" ...
	I0617 04:52:54.637660   10231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4b:83:18:ed:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:54.639644   10231 main.go:141] libmachine: STDOUT: 
	I0617 04:52:54.639663   10231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:54.639703   10231 fix.go:56] duration metric: took 13.385125ms for fixHost
	I0617 04:52:54.639708   10231 start.go:83] releasing machines lock for "newest-cni-774000", held for 13.400042ms
	W0617 04:52:54.639715   10231 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:54.639745   10231 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:54.639750   10231 start.go:728] Will try again in 5 seconds ...
	I0617 04:52:59.641933   10231 start.go:360] acquireMachinesLock for newest-cni-774000: {Name:mk4412ee1df601c627fdfb05186f42550bae1da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0617 04:52:59.642369   10231 start.go:364] duration metric: took 336.583µs to acquireMachinesLock for "newest-cni-774000"
	I0617 04:52:59.642509   10231 start.go:96] Skipping create...Using existing machine configuration
	I0617 04:52:59.642532   10231 fix.go:54] fixHost starting: 
	I0617 04:52:59.643300   10231 fix.go:112] recreateIfNeeded on newest-cni-774000: state=Stopped err=<nil>
	W0617 04:52:59.643329   10231 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 04:52:59.647766   10231 out.go:177] * Restarting existing qemu2 VM for "newest-cni-774000" ...
	I0617 04:52:59.654960   10231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4b:83:18:ed:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19087-6045/.minikube/machines/newest-cni-774000/disk.qcow2
	I0617 04:52:59.664665   10231 main.go:141] libmachine: STDOUT: 
	I0617 04:52:59.664743   10231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0617 04:52:59.664845   10231 fix.go:56] duration metric: took 22.313708ms for fixHost
	I0617 04:52:59.664865   10231 start.go:83] releasing machines lock for "newest-cni-774000", held for 22.473459ms
	W0617 04:52:59.665053   10231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-774000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-774000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0617 04:52:59.673717   10231 out.go:177] 
	W0617 04:52:59.676867   10231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0617 04:52:59.676915   10231 out.go:239] * 
	* 
	W0617 04:52:59.679751   10231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:52:59.686738   10231 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-774000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (71.523083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-774000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-252000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (31.389333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-252000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.947459ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-252000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (28.771959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-252000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (28.756417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-252000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-252000 --alsologtostderr -v=1: exit status 83 (41.511583ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-252000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-252000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:52:57.761128   10250 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:57.761287   10250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:57.761291   10250 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:57.761293   10250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:57.761438   10250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:57.761675   10250 out.go:298] Setting JSON to false
	I0617 04:52:57.761682   10250 mustload.go:65] Loading cluster: default-k8s-diff-port-252000
	I0617 04:52:57.761868   10250 config.go:182] Loaded profile config "default-k8s-diff-port-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:57.765169   10250 out.go:177] * The control-plane node default-k8s-diff-port-252000 host is not running: state=Stopped
	I0617 04:52:57.769203   10250 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-252000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-252000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (28.333666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (28.429917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
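For reference, the post-mortem helper reads a single field of the status output through a Go template; running the same command by hand (profile name taken from the log above) reproduces the exit status 7 that helpers_test.go treats as "may be ok" for a host that is merely stopped:

	# Prints "Stopped" and exits with status 7 when the profile exists
	# but its host is not running.
	out/minikube-darwin-arm64 status -p default-k8s-diff-port-252000 --format='{{.Host}}'
	echo "exit: $?"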

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-774000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (30.566959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-774000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-774000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-774000 --alsologtostderr -v=1: exit status 83 (40.907333ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-774000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-774000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 04:52:59.875578   10284 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:52:59.875743   10284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:59.875746   10284 out.go:304] Setting ErrFile to fd 2...
	I0617 04:52:59.875748   10284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:52:59.875877   10284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:52:59.876101   10284 out.go:298] Setting JSON to false
	I0617 04:52:59.876108   10284 mustload.go:65] Loading cluster: newest-cni-774000
	I0617 04:52:59.876307   10284 config.go:182] Loaded profile config "newest-cni-774000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:52:59.880215   10284 out.go:177] * The control-plane node newest-cni-774000 host is not running: state=Stopped
	I0617 04:52:59.884223   10284 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-774000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-774000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (30.144125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-774000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (29.513209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-774000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.30.1/json-events 10.27
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.08
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.03
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.02
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
55 TestFunctional/serial/CacheCmd/cache/add_local 1.18
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.1
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.63
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 2.23
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.18
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.96
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.35
258 TestNoKubernetes/serial/Stop 3.13
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.64
275 TestStartStop/group/old-k8s-version/serial/Stop 3.31
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
288 TestStartStop/group/no-preload/serial/Stop 2.94
291 TestStartStop/group/embed-certs/serial/Stop 1.94
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.74
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
315 TestStartStop/group/newest-cni/serial/Stop 1.81
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-246000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-246000: exit status 85 (95.903834ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |          |
	|         | -p download-only-246000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 04:26:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 04:26:13.874124    6542 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:26:13.874272    6542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:13.874276    6542 out.go:304] Setting ErrFile to fd 2...
	I0617 04:26:13.874278    6542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:13.874415    6542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	W0617 04:26:13.874514    6542 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19087-6045/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19087-6045/.minikube/config/config.json: no such file or directory
	I0617 04:26:13.875816    6542 out.go:298] Setting JSON to true
	I0617 04:26:13.893610    6542 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3343,"bootTime":1718620230,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:26:13.893671    6542 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:26:13.897732    6542 out.go:97] [download-only-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:26:13.901729    6542 out.go:169] MINIKUBE_LOCATION=19087
	I0617 04:26:13.897869    6542 notify.go:220] Checking for updates...
	W0617 04:26:13.897906    6542 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball: no such file or directory
	I0617 04:26:13.910647    6542 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:26:13.914821    6542 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:26:13.920698    6542 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:26:13.924713    6542 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	W0617 04:26:13.929699    6542 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 04:26:13.929899    6542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:26:13.932713    6542 out.go:97] Using the qemu2 driver based on user configuration
	I0617 04:26:13.932731    6542 start.go:297] selected driver: qemu2
	I0617 04:26:13.932734    6542 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:26:13.932801    6542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:26:13.935736    6542 out.go:169] Automatically selected the socket_vmnet network
	I0617 04:26:13.941017    6542 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0617 04:26:13.941131    6542 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:26:13.941159    6542 cni.go:84] Creating CNI manager for ""
	I0617 04:26:13.941177    6542 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0617 04:26:13.941229    6542 start.go:340] cluster config:
	{Name:download-only-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:26:13.946136    6542 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:26:13.950780    6542 out.go:97] Downloading VM boot image ...
	I0617 04:26:13.950817    6542 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/iso/arm64/minikube-v1.33.1-1718047936-19044-arm64.iso
	I0617 04:26:22.289295    6542 out.go:97] Starting "download-only-246000" primary control-plane node in "download-only-246000" cluster
	I0617 04:26:22.289321    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:22.399226    6542 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:26:22.399270    6542 cache.go:56] Caching tarball of preloaded images
	I0617 04:26:22.400216    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:22.404453    6542 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0617 04:26:22.404465    6542 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:22.632817    6542 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0617 04:26:32.948289    6542 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:32.948476    6542 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:33.644647    6542 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0617 04:26:33.644844    6542 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-246000/config.json ...
	I0617 04:26:33.644863    6542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-246000/config.json: {Name:mk162b574b25804148683088f31df764079244a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:26:33.645930    6542 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0617 04:26:33.646127    6542 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0617 04:26:34.123083    6542 out.go:169] 
	W0617 04:26:34.128610    6542 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900 0x104845900] Decompressors:map[bz2:0x140005d3c50 gz:0x140005d3c58 tar:0x140005d3bf0 tar.bz2:0x140005d3c10 tar.gz:0x140005d3c20 tar.xz:0x140005d3c30 tar.zst:0x140005d3c40 tbz2:0x140005d3c10 tgz:0x140005d3c20 txz:0x140005d3c30 tzst:0x140005d3c40 xz:0x140005d3c60 zip:0x140005d3c70 zst:0x140005d3c68] Getters:map[file:0x14000063520 http:0x1400081e190 https:0x1400081e1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0617 04:26:34.128636    6542 out_reason.go:110] 
	W0617 04:26:34.135084    6542 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 04:26:34.139077    6542 out.go:169] 
	
	
	* The control-plane node download-only-246000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-246000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
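Note: the kubectl cache failure above appears to be a platform/version gap rather than a regression: dl.k8s.io publishes no darwin/arm64 kubectl build for v1.20.0, so the .sha256 checksum fetch returns 404 and the getter aborts the download. This can be confirmed by hand with the exact URLs from the log (a sketch, not part of the suite):

	# expect 404: no darwin/arm64 kubectl was published for v1.20.0
	curl -s -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# expect 200 for a release that does ship darwin/arm64 binaries
	curl -s -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256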

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-246000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.1/json-events (10.27s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-763000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-763000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (10.27097975s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (10.27s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-763000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-763000: exit status 85 (77.173375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | -p download-only-246000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| delete  | -p download-only-246000        | download-only-246000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT | 17 Jun 24 04:26 PDT |
	| start   | -o=json --download-only        | download-only-763000 | jenkins | v1.33.1 | 17 Jun 24 04:26 PDT |                     |
	|         | -p download-only-763000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 04:26:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 04:26:34.801086    6579 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:26:34.801218    6579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:34.801222    6579 out.go:304] Setting ErrFile to fd 2...
	I0617 04:26:34.801224    6579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:26:34.801344    6579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:26:34.802367    6579 out.go:298] Setting JSON to true
	I0617 04:26:34.818496    6579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3364,"bootTime":1718620230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:26:34.818554    6579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:26:34.821982    6579 out.go:97] [download-only-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:26:34.825978    6579 out.go:169] MINIKUBE_LOCATION=19087
	I0617 04:26:34.822095    6579 notify.go:220] Checking for updates...
	I0617 04:26:34.832960    6579 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:26:34.835958    6579 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:26:34.839019    6579 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:26:34.841951    6579 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	W0617 04:26:34.847940    6579 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 04:26:34.848156    6579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:26:34.850907    6579 out.go:97] Using the qemu2 driver based on user configuration
	I0617 04:26:34.850914    6579 start.go:297] selected driver: qemu2
	I0617 04:26:34.850917    6579 start.go:901] validating driver "qemu2" against <nil>
	I0617 04:26:34.850951    6579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 04:26:34.853933    6579 out.go:169] Automatically selected the socket_vmnet network
	I0617 04:26:34.859069    6579 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0617 04:26:34.859159    6579 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 04:26:34.859180    6579 cni.go:84] Creating CNI manager for ""
	I0617 04:26:34.859188    6579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0617 04:26:34.859193    6579 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0617 04:26:34.859231    6579 start.go:340] cluster config:
	{Name:download-only-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:26:34.863544    6579 iso.go:125] acquiring lock: {Name:mk9ba180fc26388ac7e7eaa1003639a865d8e2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 04:26:34.864795    6579 out.go:97] Starting "download-only-763000" primary control-plane node in "download-only-763000" cluster
	I0617 04:26:34.864800    6579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:26:35.076867    6579 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:26:35.076955    6579 cache.go:56] Caching tarball of preloaded images
	I0617 04:26:35.077823    6579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:26:35.083458    6579 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0617 04:26:35.083495    6579 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:35.298590    6579 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0617 04:26:42.945651    6579 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:42.945825    6579 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0617 04:26:43.488298    6579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0617 04:26:43.488494    6579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-763000/config.json ...
	I0617 04:26:43.488511    6579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19087-6045/.minikube/profiles/download-only-763000/config.json: {Name:mkcdbb45ba6dc7b38cad9220bdc5a4a782ac4553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 04:26:43.488762    6579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0617 04:26:43.488883    6579 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-763000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.08s)
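For the v1.30.1 run the preload tarball downloads cleanly: the expected digest rides along as a checksum=md5:… query parameter, and preload.go verifies it after saving. The check can be reproduced on macOS with the path and digest from the log (a sketch; md5 -q prints only the digest):

	md5 -q /Users/jenkins/minikube-integration/19087-6045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	# expected output: 7ffd0655905ace939b15286e37914582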

TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-763000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-001000 --alsologtostderr --binary-mirror http://127.0.0.1:51054 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-001000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-001000
--- PASS: TestBinaryMirror (0.34s)
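TestBinaryMirror points --download-only at a local HTTP endpoint and verifies that the Kubernetes binaries are fetched from that mirror instead of dl.k8s.io. Any static file server exposing the release layout can stand in; a hypothetical local mirror (the directory layout shown is an assumption about what minikube requests):

	# serve a directory tree shaped like the release bucket, e.g. ./v1.30.1/bin/linux/arm64/kubectl
	python3 -m http.server 51054 &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:51054 --driver=qemu2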

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-585000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-585000: exit status 85 (58.222209ms)

-- stdout --
	* Profile "addons-585000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-585000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-585000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-585000: exit status 85 (61.994666ms)

-- stdout --
	* Profile "addons-585000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-585000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
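Both PreSetup subtests pass because the failure is the assertion: addon commands against a profile that does not exist exit with status 85, and the tests check for exactly that. A minimal shell reproduction of the assertion (hypothetical wrapper, not test code):

	out/minikube-darwin-arm64 addons enable dashboard -p addons-585000
	[ $? -eq 85 ] && echo "profile not found, as the test expects"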

TestHyperKitDriverInstallOrUpdate (10.03s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.03s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status: exit status 7 (30.881208ms)

-- stdout --
	nospam-533000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status: exit status 7 (30.511041ms)

-- stdout --
	nospam-533000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status: exit status 7 (29.577667ms)

-- stdout --
	nospam-533000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause: exit status 83 (37.626917ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause: exit status 83 (45.362333ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause: exit status 83 (38.818625ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause: exit status 83 (39.634917ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause: exit status 83 (39.019583ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause: exit status 83 (39.652458ms)

-- stdout --
	* The control-plane node nospam-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-533000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop: (2.131653208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop: (3.212890459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-533000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-533000 stop: (3.673636917s)
--- PASS: TestErrorSpam/stop (9.02s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19087-6045/.minikube/files/etc/test/nested/copy/6540/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1: (1.182458875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.3: (1.067012791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local267502066/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add minikube-local-cache-test:functional-296000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache delete minikube-local-cache-test:functional-296000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-296000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
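Taken together, the CacheCmd subtests walk the whole image-cache lifecycle. The equivalent manual sequence, using the same commands the tests invoke:

	out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1   # pull the image into the local cache
	out/minikube-darwin-arm64 cache list                                                 # confirm it is listed
	out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1                     # and remove it again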

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 config get cpus: exit status 14 (29.716792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 config get cpus: exit status 14 (32.376125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
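The exit-14 cases are the point of ConfigCmd: `config get` on an unset key fails with "specified key could not be found in config", while the same `get` succeeds once `config set cpus 2` has run. Condensed into a shell sketch:

	out/minikube-darwin-arm64 -p functional-296000 config get cpus; echo "exit=$?"   # exit=14 while unset
	out/minikube-darwin-arm64 -p functional-296000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-296000 config get cpus; echo "exit=$?"   # prints 2, exit=0
	out/minikube-darwin-arm64 -p functional-296000 config unset cpus                 # back to the exit-14 state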

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (164.281125ms)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0617 04:28:31.644120    7195 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:28:31.644290    7195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:31.644295    7195 out.go:304] Setting ErrFile to fd 2...
	I0617 04:28:31.644299    7195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:31.644493    7195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:28:31.645952    7195 out.go:298] Setting JSON to false
	I0617 04:28:31.667492    7195 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3481,"bootTime":1718620230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:28:31.667564    7195 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:28:31.674484    7195 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0617 04:28:31.681491    7195 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:28:31.685467    7195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:28:31.681525    7195 notify.go:220] Checking for updates...
	I0617 04:28:31.692462    7195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:28:31.696461    7195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:28:31.699472    7195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:28:31.702472    7195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:28:31.706875    7195 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:28:31.707241    7195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:28:31.711467    7195 out.go:177] * Using the qemu2 driver based on existing profile
	I0617 04:28:31.718533    7195 start.go:297] selected driver: qemu2
	I0617 04:28:31.718539    7195 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:28:31.718608    7195 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:28:31.725469    7195 out.go:177] 
	W0617 04:28:31.729475    7195 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0617 04:28:31.732469    7195 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
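DryRun passes by asserting the validation failure: 250MB is below minikube's usable minimum of 1800MB, so the start exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work; the second dry-run omits --memory and succeeds. A dry-run that clears the floor would look like this (2048mb is an arbitrary illustrative value):

	out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 2048mb --alsologtostderr --driver=qemu2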

TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (101.826625ms)

-- stdout --
	* [functional-296000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0617 04:28:31.891110    7206 out.go:291] Setting OutFile to fd 1 ...
	I0617 04:28:31.891223    7206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:31.891226    7206 out.go:304] Setting ErrFile to fd 2...
	I0617 04:28:31.891228    7206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 04:28:31.891349    7206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19087-6045/.minikube/bin
	I0617 04:28:31.892659    7206 out.go:298] Setting JSON to false
	I0617 04:28:31.909244    7206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3481,"bootTime":1718620230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0617 04:28:31.909322    7206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0617 04:28:31.914472    7206 out.go:177] * [functional-296000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0617 04:28:31.918487    7206 out.go:177]   - MINIKUBE_LOCATION=19087
	I0617 04:28:31.922458    7206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	I0617 04:28:31.918542    7206 notify.go:220] Checking for updates...
	I0617 04:28:31.925343    7206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0617 04:28:31.928498    7206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 04:28:31.931482    7206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	I0617 04:28:31.932738    7206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 04:28:31.935706    7206 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0617 04:28:31.935975    7206 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 04:28:31.940476    7206 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0617 04:28:31.945426    7206 start.go:297] selected driver: qemu2
	I0617 04:28:31.945431    7206 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 04:28:31.945482    7206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 04:28:31.951510    7206 out.go:177] 
	W0617 04:28:31.955370    7206 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0617 04:28:31.959483    7206 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)
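InternationalLanguage repeats the undersized-memory dry-run under a French locale and asserts the localized message: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB"). Minikube selects the translation from the process locale, so the behavior can be reproduced by exporting one (a sketch; assumes a French locale is available on the host):

	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --driver=qemu2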

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.194183042s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image rm gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-296000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image save --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "70.735625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.980666ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "69.887041ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.705334ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.0123755s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-296000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-296000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-311000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-311000 --output=json --user=testUser: (3.175504167s)
--- PASS: TestJSONOutput/stop/Command (3.18s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-090000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-090000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.959875ms)

-- stdout --
	{"specversion":"1.0","id":"564bdc73-8391-4968-9617-aded4eb71304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-090000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73b8baf4-3756-48ab-afbc-0c1591557478","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19087"}}
	{"specversion":"1.0","id":"63ad0fd9-31b3-433b-8650-16af2b9e42b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig"}}
	{"specversion":"1.0","id":"71698feb-2199-4282-b598-c8cbd3fa3e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dbba947f-de56-4804-8a51-38b1ff21cd98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"43b22250-5d91-46b7-b2e0-1ef89cc58be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube"}}
	{"specversion":"1.0","id":"a92c31d1-c460-44e0-977e-28e336c07e3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6ba033a-8241-4303-a9ff-c995b9946e5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-090000
--- PASS: TestErrorJSONOutput (0.32s)
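
Note: each line of the JSON output above is a CloudEvents 1.0 envelope; the "type" field distinguishes step, info, and error events. A minimal sketch of consuming that stream outside the test (assumes jq is installed; not part of the suite):

	# print the message of any error event, e.g. "The driver 'fail' is not supported on darwin/arm64"
	out/minikube-darwin-arm64 start -p json-output-error-090000 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'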

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.96s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.609667ms)

-- stdout --
	* [NoKubernetes-684000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19087
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19087-6045/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19087-6045/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
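
Note: the exit-status-14 failure above is the behavior under test: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the invocations that would be accepted instead (same binary, profile, and driver as the log; these are not commands the test runs):

	# start without Kubernetes: omit --kubernetes-version entirely
	out/minikube-darwin-arm64 start -p NoKubernetes-684000 --no-kubernetes --driver=qemu2
	# or, as the error text suggests, clear any globally configured version first
	out/minikube-darwin-arm64 config unset kubernetes-version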

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-684000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-684000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.570375ms)

-- stdout --
	* The control-plane node NoKubernetes-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-684000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.690046833s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.662941333s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-684000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-684000: (3.128004833s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-684000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-684000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.713792ms)

-- stdout --
	* The control-plane node NoKubernetes-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-684000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-767000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-013000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-013000 --alsologtostderr -v=3: (3.306629042s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000 -n old-k8s-version-013000: exit status 7 (52.250625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-013000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
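
Note: the "exit status 7 (may be ok)" pattern recurs in each EnableAddonAfterStop check below. Per minikube status --help, the exit code encodes component state as bit flags (1 = minikube, 2 = cluster, 4 = kubernetes not running), so 7 is exactly what a fully stopped profile should return, and the test goes on to enable the addon anyway. Reproducing the check by hand (profile name taken from the log):

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-013000; echo "exit: $?"   # expect "Stopped" / "exit: 7"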

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-828000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-828000 --alsologtostderr -v=3: (2.942371542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-769000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-769000 --alsologtostderr -v=3: (1.936952583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (55.996292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-828000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-769000 -n embed-certs-769000: exit status 7 (55.787458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-769000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-252000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-252000 --alsologtostderr -v=3: (1.74430175s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-252000 -n default-k8s-diff-port-252000: exit status 7 (54.228208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-252000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-774000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-774000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-774000 --alsologtostderr -v=3: (1.814191625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-774000 -n newest-cni-774000: exit status 7 (54.829667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-774000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3604213826/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718623674232849000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3604213826/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718623674232849000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3604213826/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718623674232849000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3604213826/001/test-1718623674232849000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (53.479666ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.997792ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.127708ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.576417ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.585584ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.159625ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.598208ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p": exit status 83 (46.681667ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3604213826/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.80s)
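
Note: this and the two MountCmd subtests that follow skip for the same recorded reason: macOS never grants the unsigned test binary permission to listen on a non-localhost port, so the 9p mount cannot appear. One possible local workaround, assuming the application-firewall prompt is what blocks the listener (an assumption; the harness does not do this):

	# hypothetical workaround: pre-approve the unsigned binary with the macOS application firewall
	sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add out/minikube-darwin-arm64
	sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblockapp out/minikube-darwin-arm64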

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1822896555/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.819875ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.6485ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.382792ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.514541ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.322833ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.281667ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.20125ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p": exit status 83 (47.694125ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1822896555/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (84.069042ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (84.295791ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (83.420667ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (89.8185ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (83.753667ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (84.969917ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (85.619125ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup193040655/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.31s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.41s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626:
----------------------- debugLogs start: cilium-696000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-696000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-696000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/hosts:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/resolv.conf:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-696000

>>> host: crictl pods:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: crictl containers:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> k8s: describe netcat deployment:
error: context "cilium-696000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-696000" does not exist

>>> k8s: netcat logs:
error: context "cilium-696000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-696000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-696000" does not exist

>>> k8s: coredns logs:
error: context "cilium-696000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-696000" does not exist

>>> k8s: api server logs:
error: context "cilium-696000" does not exist

>>> host: /etc/cni:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: ip a s:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: ip r s:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: iptables-save:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: iptables table nat:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-696000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-696000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-696000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-696000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-696000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-696000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-696000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-696000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-696000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-696000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-696000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: kubelet daemon config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> k8s: kubelet logs:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-696000

>>> host: docker daemon status:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: docker daemon config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: docker system info:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: cri-docker daemon status:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: cri-docker daemon config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: cri-dockerd version:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: containerd daemon status:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: containerd daemon config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: containerd config dump:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: crio daemon status:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: crio daemon config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: /etc/crio:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

>>> host: crio config:
* Profile "cilium-696000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696000"

----------------------- debugLogs end: cilium-696000 [took: 2.183763708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-696000
--- SKIP: TestNetworkPlugins/group/cilium (2.41s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-914000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)