Test Report: QEMU_macOS 19360

cd79d30fb13c14d30ca0dbfe151ef256c3a20136:2024-07-31:35589

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.26
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.05
36 TestAddons/Setup 10.25
37 TestCertOptions 10.04
38 TestCertExpiration 196.24
39 TestDockerFlags 10.07
40 TestForceSystemdFlag 11.51
41 TestForceSystemdEnv 10.09
47 TestErrorSpam/setup 9.93
56 TestFunctional/serial/StartWithProxy 9.92
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.86
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.04
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 98.66
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.27
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.64
150 TestMultiControlPlane/serial/StartCluster 9.84
151 TestMultiControlPlane/serial/DeployApp 96.28
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.07
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 47.71
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.43
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 1.91
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.89
174 TestJSONOutput/start/Command 9.99
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.06
206 TestMountStart/serial/StartWithMountFirst 10.19
209 TestMultiNode/serial/FreshStart2Nodes 9.96
210 TestMultiNode/serial/DeployApp2Nodes 105.96
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 51.17
218 TestMultiNode/serial/RestartKeepsNodes 9.19
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 1.9
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.19
226 TestPreload 10.03
228 TestScheduledStopUnix 9.93
229 TestSkaffold 12.14
232 TestRunningBinaryUpgrade 621.46
234 TestKubernetesUpgrade 17.21
248 TestStoppedBinaryUpgrade/Upgrade 585.55
258 TestPause/serial/Start 10.12
261 TestNoKubernetes/serial/StartWithK8s 9.81
262 TestNoKubernetes/serial/StartWithStopK8s 7.51
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.66
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.5
265 TestNoKubernetes/serial/Start 5.28
269 TestNoKubernetes/serial/StartNoArgs 5.35
271 TestNetworkPlugins/group/auto/Start 10.02
272 TestNetworkPlugins/group/calico/Start 9.87
273 TestNetworkPlugins/group/custom-flannel/Start 9.92
274 TestNetworkPlugins/group/false/Start 9.85
275 TestNetworkPlugins/group/kindnet/Start 9.81
276 TestNetworkPlugins/group/flannel/Start 9.8
277 TestNetworkPlugins/group/enable-default-cni/Start 9.89
278 TestNetworkPlugins/group/bridge/Start 9.79
279 TestNetworkPlugins/group/kubenet/Start 9.84
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.88
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.26
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.94
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/embed-certs/serial/SecondStart 5.24
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.97
316 TestStartStop/group/newest-cni/serial/FirstStart 9.85
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.38
326 TestStartStop/group/newest-cni/serial/SecondStart 5.25
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (19.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-537000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-537000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (19.253963541s)

-- stdout --
	{"specversion":"1.0","id":"f1648cfc-5869-4bfe-b654-9ab5b13c64e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-537000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12757908-6d0c-4209-acbc-5e9b89c19a73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"23279599-c2c7-4794-8b14-10200ef2b86f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig"}}
	{"specversion":"1.0","id":"965908f2-4af1-455c-8784-1df6af3031bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"52b16141-c4c1-439a-a1b4-0339ccef6690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"37ec653e-a65e-445d-9b62-7c590279382f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube"}}
	{"specversion":"1.0","id":"f542ce64-32b0-4bd9-8b58-8cbb2b089959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"183a7d90-8d70-4089-a322-183d9492d4cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca37cdf1-ff81-4334-a44a-5a3405bd31d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"155a735d-0c4e-4154-b4ac-55d1248c9018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"76dab104-9774-47c6-b3fb-2e57f13a837f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-537000\" primary control-plane node in \"download-only-537000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"40ce722c-f7be-4aba-a28c-dfba22ec743e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"10d838d6-bb30-4799-a3aa-732e00c0c5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80] Decompressors:map[bz2:0x140007cb300 gz:0x140007cb308 tar:0x140007cb2b0 tar.bz2:0x140007cb2c0 tar.gz:0x140007cb2d0 tar.xz:0x140007cb2e0 tar.zst:0x140007cb2f0 tbz2:0x140007cb2c0 tgz:0x14
0007cb2d0 txz:0x140007cb2e0 tzst:0x140007cb2f0 xz:0x140007cb310 zip:0x140007cb320 zst:0x140007cb318] Getters:map[file:0x140017846d0 http:0x140000b4cd0 https:0x140000b4d20] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"fa708ac5-6e94-4258-bcaf-e64451132c18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 12:14:43.236476    7070 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:43.236636    7070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:43.236640    7070 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:43.236642    7070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:43.236753    7070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	W0731 12:14:43.236840    7070 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19360-6578/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19360-6578/.minikube/config/config.json: no such file or directory
	I0731 12:14:43.238127    7070 out.go:298] Setting JSON to true
	I0731 12:14:43.254261    7070 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4452,"bootTime":1722448831,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:14:43.254336    7070 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:43.260052    7070 out.go:97] [download-only-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:43.260209    7070 notify.go:220] Checking for updates...
	W0731 12:14:43.260273    7070 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 12:14:43.263780    7070 out.go:169] MINIKUBE_LOCATION=19360
	I0731 12:14:43.267036    7070 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:14:43.271999    7070 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:43.274926    7070 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:43.278033    7070 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	W0731 12:14:43.282398    7070 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:14:43.282593    7070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:43.285971    7070 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:14:43.285992    7070 start.go:297] selected driver: qemu2
	I0731 12:14:43.285994    7070 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:14:43.286062    7070 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:14:43.289028    7070 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:14:43.294191    7070 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:14:43.294313    7070 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:14:43.294327    7070 cni.go:84] Creating CNI manager for ""
	I0731 12:14:43.294344    7070 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:14:43.294395    7070 start.go:340] cluster config:
	{Name:download-only-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:43.298096    7070 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:43.301998    7070 out.go:97] Downloading VM boot image ...
	I0731 12:14:43.302012    7070 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 12:14:54.013201    7070 out.go:97] Starting "download-only-537000" primary control-plane node in "download-only-537000" cluster
	I0731 12:14:54.013225    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:54.079023    7070 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:54.079032    7070 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:54.079889    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:54.085131    7070 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 12:14:54.085139    7070 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:54.167943    7070 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:15:01.338286    7070 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:01.338444    7070 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:02.033226    7070 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:15:02.033427    7070 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-537000/config.json ...
	I0731 12:15:02.033444    7070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-537000/config.json: {Name:mk119f50d348b283632d10c30f43558feb9f07f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:15:02.033659    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:15:02.034485    7070 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 12:15:02.414392    7070 out.go:169] 
	W0731 12:15:02.418570    7070 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80] Decompressors:map[bz2:0x140007cb300 gz:0x140007cb308 tar:0x140007cb2b0 tar.bz2:0x140007cb2c0 tar.gz:0x140007cb2d0 tar.xz:0x140007cb2e0 tar.zst:0x140007cb2f0 tbz2:0x140007cb2c0 tgz:0x140007cb2d0 txz:0x140007cb2e0 tzst:0x140007cb2f0 xz:0x140007cb310 zip:0x140007cb320 zst:0x140007cb318] Getters:map[file:0x140017846d0 http:0x140000b4cd0 https:0x140000b4d20] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 12:15:02.418591    7070 out_reason.go:110] 
	W0731 12:15:02.427314    7070 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:15:02.431461    7070 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-537000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (19.26s)
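
A note on the root cause visible in the error above: the download fails because the checksum file https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, so the getter aborts with "invalid checksum" before it ever fetches the binary, plausibly because upstream Kubernetes did not publish darwin/arm64 client binaries for v1.20.0 (Apple-silicon builds only appeared in later releases). A minimal manual check, using only the URL from the log (the v1.21.0 comparison is an assumption, not part of this run):

	# Print the final HTTP status for the URL the test tried (expect 404):
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# For comparison, a newer release assumed here to ship darwin/arm64 binaries:
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.21.0/bin/darwin/arm64/kubectl.sha256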

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
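
This failure is a direct consequence of the previous one: the kubectl download never completed, so nothing exists at the cache path the test stats. A one-line sketch reproducing the check, with the path taken verbatim from the error above:

	# Reproduces the test's existence check; expect "No such file or directory":
	stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl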

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-353000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-353000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.903363792s)

-- stdout --
	* [offline-docker-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-353000" primary control-plane node in "offline-docker-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:42.737290    8471 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:42.737450    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:42.737454    8471 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:42.737457    8471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:42.737589    8471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:26:42.738811    8471 out.go:298] Setting JSON to false
	I0731 12:26:42.756336    8471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5171,"bootTime":1722448831,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:26:42.756411    8471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:42.761253    8471 out.go:177] * [offline-docker-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:42.769323    8471 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:26:42.769362    8471 notify.go:220] Checking for updates...
	I0731 12:26:42.775229    8471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:26:42.778254    8471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:42.782228    8471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:42.785289    8471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:26:42.788273    8471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:42.791620    8471 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:26:42.791694    8471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:42.796203    8471 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:26:42.803255    8471 start.go:297] selected driver: qemu2
	I0731 12:26:42.803265    8471 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:26:42.803272    8471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:42.805238    8471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:26:42.808251    8471 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:26:42.811437    8471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:26:42.811457    8471 cni.go:84] Creating CNI manager for ""
	I0731 12:26:42.811466    8471 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:42.811475    8471 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:26:42.811513    8471 start.go:340] cluster config:
	{Name:offline-docker-353000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:26:42.815190    8471 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:42.823257    8471 out.go:177] * Starting "offline-docker-353000" primary control-plane node in "offline-docker-353000" cluster
	I0731 12:26:42.827198    8471 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:26:42.827229    8471 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:26:42.827240    8471 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:42.827310    8471 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:42.827315    8471 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:26:42.827380    8471 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/offline-docker-353000/config.json ...
	I0731 12:26:42.827390    8471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/offline-docker-353000/config.json: {Name:mk393629b7edf22d9f4b525b80e52d313cf0c131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:26:42.827681    8471 start.go:360] acquireMachinesLock for offline-docker-353000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:42.827714    8471 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "offline-docker-353000"
	I0731 12:26:42.827725    8471 start.go:93] Provisioning new machine with config: &{Name:offline-docker-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:42.827754    8471 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:42.834229    8471 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:26:42.850085    8471 start.go:159] libmachine.API.Create for "offline-docker-353000" (driver="qemu2")
	I0731 12:26:42.850119    8471 client.go:168] LocalClient.Create starting
	I0731 12:26:42.850195    8471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:42.850227    8471 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:42.850238    8471 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:42.850286    8471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:42.850309    8471 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:42.850316    8471 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:42.850730    8471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:42.999871    8471 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:43.088188    8471 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:43.088198    8471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:43.088380    8471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:43.098880    8471 main.go:141] libmachine: STDOUT: 
	I0731 12:26:43.098906    8471 main.go:141] libmachine: STDERR: 
	I0731 12:26:43.098989    8471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2 +20000M
	I0731 12:26:43.113641    8471 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:43.113660    8471 main.go:141] libmachine: STDERR: 
	I0731 12:26:43.113688    8471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:43.113693    8471 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:43.113708    8471 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:43.113735    8471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:a3:34:1d:27:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:43.115425    8471 main.go:141] libmachine: STDOUT: 
	I0731 12:26:43.115441    8471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:43.115459    8471 client.go:171] duration metric: took 265.339708ms to LocalClient.Create
	I0731 12:26:45.115790    8471 start.go:128] duration metric: took 2.288060834s to createHost
	I0731 12:26:45.115804    8471 start.go:83] releasing machines lock for "offline-docker-353000", held for 2.288122125s
	W0731 12:26:45.115816    8471 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:45.125339    8471 out.go:177] * Deleting "offline-docker-353000" in qemu2 ...
	W0731 12:26:45.145679    8471 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:45.145686    8471 start.go:729] Will try again in 5 seconds ...
	I0731 12:26:50.147840    8471 start.go:360] acquireMachinesLock for offline-docker-353000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:50.148382    8471 start.go:364] duration metric: took 411.625µs to acquireMachinesLock for "offline-docker-353000"
	I0731 12:26:50.148524    8471 start.go:93] Provisioning new machine with config: &{Name:offline-docker-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:50.148951    8471 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:50.162380    8471 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:26:50.212185    8471 start.go:159] libmachine.API.Create for "offline-docker-353000" (driver="qemu2")
	I0731 12:26:50.212241    8471 client.go:168] LocalClient.Create starting
	I0731 12:26:50.212352    8471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:50.212411    8471 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:50.212426    8471 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:50.212489    8471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:50.212538    8471 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:50.212551    8471 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:50.213058    8471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:50.373537    8471 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:50.550636    8471 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:50.550649    8471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:50.550894    8471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:50.560463    8471 main.go:141] libmachine: STDOUT: 
	I0731 12:26:50.560483    8471 main.go:141] libmachine: STDERR: 
	I0731 12:26:50.560548    8471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2 +20000M
	I0731 12:26:50.568373    8471 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:50.568387    8471 main.go:141] libmachine: STDERR: 
	I0731 12:26:50.568404    8471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:50.568408    8471 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:50.568421    8471 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:50.568446    8471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:7f:fc:ea:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/offline-docker-353000/disk.qcow2
	I0731 12:26:50.570001    8471 main.go:141] libmachine: STDOUT: 
	I0731 12:26:50.570017    8471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:50.570030    8471 client.go:171] duration metric: took 357.789042ms to LocalClient.Create
	I0731 12:26:52.572158    8471 start.go:128] duration metric: took 2.4232205s to createHost
	I0731 12:26:52.572312    8471 start.go:83] releasing machines lock for "offline-docker-353000", held for 2.423942416s
	W0731 12:26:52.572652    8471 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:52.586241    8471 out.go:177] 
	W0731 12:26:52.589410    8471 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:26:52.589430    8471 out.go:239] * 
	* 
	W0731 12:26:52.591260    8471 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:26:52.600141    8471 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-353000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-31 12:26:52.611953 -0700 PDT m=+729.466095043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-353000 -n offline-docker-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-353000 -n offline-docker-353000: exit status 7 (48.437ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-353000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-353000
--- FAIL: TestOffline (10.05s)
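
Both VM-creation attempts fail at the same step: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which points to the socket_vmnet daemon not running (or its socket being stale) on the build agent rather than to a minikube regression. A minimal sketch of an agent-side check, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests:

	# Is the daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Running the client directly should reproduce the same refusal while the
	# daemon is down (client and socket paths taken from the log above):
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

The same "Connection refused" error appears in the TestAddons/Setup log below, so a single agent-side fix may account for many of the other short-lived Start failures in the table above.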

TestAddons/Setup (10.25s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-728000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-728000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.248980666s)

-- stdout --
	* [addons-728000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-728000" primary control-plane node in "addons-728000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-728000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:15:30.469630    7176 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:15:30.469749    7176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:30.469751    7176 out.go:304] Setting ErrFile to fd 2...
	I0731 12:15:30.469754    7176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:30.469885    7176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:15:30.470954    7176 out.go:298] Setting JSON to false
	I0731 12:15:30.487267    7176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4499,"bootTime":1722448831,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:15:30.487334    7176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:15:30.492442    7176 out.go:177] * [addons-728000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:15:30.499494    7176 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:15:30.499554    7176 notify.go:220] Checking for updates...
	I0731 12:15:30.507454    7176 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:15:30.510476    7176 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:15:30.513646    7176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:15:30.516493    7176 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:15:30.519535    7176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:15:30.522645    7176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:15:30.526414    7176 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:15:30.533370    7176 start.go:297] selected driver: qemu2
	I0731 12:15:30.533378    7176 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:15:30.533386    7176 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:15:30.535777    7176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:15:30.538500    7176 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:15:30.541625    7176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:15:30.541656    7176 cni.go:84] Creating CNI manager for ""
	I0731 12:15:30.541663    7176 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:15:30.541667    7176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:15:30.541697    7176 start.go:340] cluster config:
	{Name:addons-728000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:30.545611    7176 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:15:30.553466    7176 out.go:177] * Starting "addons-728000" primary control-plane node in "addons-728000" cluster
	I0731 12:15:30.557565    7176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:30.557585    7176 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:15:30.557598    7176 cache.go:56] Caching tarball of preloaded images
	I0731 12:15:30.557682    7176 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:15:30.557688    7176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:15:30.557899    7176 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/addons-728000/config.json ...
	I0731 12:15:30.557911    7176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/addons-728000/config.json: {Name:mk6f169d523393d4a7d841dd4fea6344411290bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:15:30.558471    7176 start.go:360] acquireMachinesLock for addons-728000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:30.558548    7176 start.go:364] duration metric: took 71µs to acquireMachinesLock for "addons-728000"
	I0731 12:15:30.558563    7176 start.go:93] Provisioning new machine with config: &{Name:addons-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:15:30.558603    7176 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:15:30.567450    7176 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 12:15:30.585823    7176 start.go:159] libmachine.API.Create for "addons-728000" (driver="qemu2")
	I0731 12:15:30.585852    7176 client.go:168] LocalClient.Create starting
	I0731 12:15:30.585972    7176 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:15:30.729399    7176 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:15:30.779705    7176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:15:31.038573    7176 main.go:141] libmachine: Creating SSH key...
	I0731 12:15:31.158789    7176 main.go:141] libmachine: Creating Disk image...
	I0731 12:15:31.158795    7176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:15:31.159022    7176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:31.168557    7176 main.go:141] libmachine: STDOUT: 
	I0731 12:15:31.168575    7176 main.go:141] libmachine: STDERR: 
	I0731 12:15:31.168623    7176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2 +20000M
	I0731 12:15:31.176439    7176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:15:31.176461    7176 main.go:141] libmachine: STDERR: 
	I0731 12:15:31.176474    7176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:31.176480    7176 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:15:31.176506    7176 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:31.176538    7176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8f:a1:2e:74:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:31.178252    7176 main.go:141] libmachine: STDOUT: 
	I0731 12:15:31.178269    7176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:31.178296    7176 client.go:171] duration metric: took 592.439709ms to LocalClient.Create
	I0731 12:15:33.180475    7176 start.go:128] duration metric: took 2.621893083s to createHost
	I0731 12:15:33.180520    7176 start.go:83] releasing machines lock for "addons-728000", held for 2.622003916s
	W0731 12:15:33.180597    7176 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:33.191743    7176 out.go:177] * Deleting "addons-728000" in qemu2 ...
	W0731 12:15:33.222960    7176 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:33.223016    7176 start.go:729] Will try again in 5 seconds ...
	I0731 12:15:38.225086    7176 start.go:360] acquireMachinesLock for addons-728000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:38.225662    7176 start.go:364] duration metric: took 384.875µs to acquireMachinesLock for "addons-728000"
	I0731 12:15:38.225809    7176 start.go:93] Provisioning new machine with config: &{Name:addons-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:15:38.229910    7176 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:15:38.237153    7176 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 12:15:38.285389    7176 start.go:159] libmachine.API.Create for "addons-728000" (driver="qemu2")
	I0731 12:15:38.285434    7176 client.go:168] LocalClient.Create starting
	I0731 12:15:38.285571    7176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:15:38.285639    7176 main.go:141] libmachine: Decoding PEM data...
	I0731 12:15:38.285658    7176 main.go:141] libmachine: Parsing certificate...
	I0731 12:15:38.285762    7176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:15:38.285808    7176 main.go:141] libmachine: Decoding PEM data...
	I0731 12:15:38.285822    7176 main.go:141] libmachine: Parsing certificate...
	I0731 12:15:38.286398    7176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:15:38.442754    7176 main.go:141] libmachine: Creating SSH key...
	I0731 12:15:38.624902    7176 main.go:141] libmachine: Creating Disk image...
	I0731 12:15:38.624909    7176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:15:38.625162    7176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:38.635023    7176 main.go:141] libmachine: STDOUT: 
	I0731 12:15:38.635040    7176 main.go:141] libmachine: STDERR: 
	I0731 12:15:38.635104    7176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2 +20000M
	I0731 12:15:38.642976    7176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:15:38.642993    7176 main.go:141] libmachine: STDERR: 
	I0731 12:15:38.643007    7176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:38.643012    7176 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:15:38.643023    7176 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:38.643053    7176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:64:e6:fd:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/addons-728000/disk.qcow2
	I0731 12:15:38.644731    7176 main.go:141] libmachine: STDOUT: 
	I0731 12:15:38.644748    7176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:38.644760    7176 client.go:171] duration metric: took 359.325333ms to LocalClient.Create
	I0731 12:15:40.646994    7176 start.go:128] duration metric: took 2.41704975s to createHost
	I0731 12:15:40.647063    7176 start.go:83] releasing machines lock for "addons-728000", held for 2.421400583s
	W0731 12:15:40.647462    7176 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:40.656818    7176 out.go:177] 
	W0731 12:15:40.663997    7176 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:15:40.664022    7176 out.go:239] * 
	* 
	W0731 12:15:40.666916    7176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:15:40.676855    7176 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-728000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.25s)
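Note: the create/delete/retry loop above (first attempt at 12:15:31, retry at 12:15:38) never gets past socket_vmnet_client. Assuming the client simply connects to the named socket and execs the rest of its argv with the connection on fd 3, as the "-netdev socket,id=net0,fd=3" flag in the QEMU command line implies, the failure can be reproduced in isolation with a hypothetical no-op command:

	# Expect the same "Connection refused" while the daemon is down:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true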

TestCertOptions (10.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-884000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-884000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.785211333s)

-- stdout --
	* [cert-options-884000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-884000" primary control-plane node in "cert-options-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-884000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-884000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-884000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.343042ms)

-- stdout --
	* The control-plane node cert-options-884000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-884000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-884000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-884000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-884000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-884000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.955333ms)

-- stdout --
	* The control-plane node cert-options-884000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-884000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-884000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-884000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-884000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-31 12:38:18.892875 -0700 PDT m=+1415.765539251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-884000 -n cert-options-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-884000 -n cert-options-884000: exit status 7 (29.716375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-884000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-884000
--- FAIL: TestCertOptions (10.04s)
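Note: the SAN assertions at cert_options_test.go:69 can only pass once a cluster actually boots. On a healthy run they could be checked by hand with the same openssl invocation the test issues, piped through grep (the grep filter is an editorial addition, not part of the test):

	out/minikube-darwin-arm64 -p cert-options-884000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"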

TestCertExpiration (196.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.848651875s)

-- stdout --
	* [cert-expiration-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-505000" primary control-plane node in "cert-expiration-505000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-505000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-505000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.245385334s)

-- stdout --
	* [cert-expiration-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-505000" primary control-plane node in "cert-expiration-505000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-505000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-505000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-505000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-505000" primary control-plane node in "cert-expiration-505000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-505000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-505000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-505000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-31 12:41:08.871211 -0700 PDT m=+1585.754382793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-505000 -n cert-expiration-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-505000 -n cert-expiration-505000: exit status 7 (66.613917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-505000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-505000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-505000
--- FAIL: TestCertExpiration (196.24s)
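Note: most of the 196s runtime is the test's deliberate wait, not the two failed starts (10.8s + 5.2s). The test provisions a cluster with three-minute certificates, lets them lapse, then restarts with --cert-expiration=8760h and expects a warning about the expired certs (cert_options_test.go:136). A condensed sketch of that sequence using the commands from the log; the 180-second sleep is an assumption standing in for the test's internal wait:

	out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the 3m certificates expire
	out/minikube-darwin-arm64 start -p cert-expiration-505000 --memory=2048 --cert-expiration=8760h --driver=qemu2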

TestDockerFlags (10.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-558000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-558000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.836573791s)

-- stdout --
	* [docker-flags-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-558000" primary control-plane node in "docker-flags-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:58.920087    9315 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:58.920236    9315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:58.920239    9315 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:58.920242    9315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:58.920374    9315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:37:58.921455    9315 out.go:298] Setting JSON to false
	I0731 12:37:58.937580    9315 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5847,"bootTime":1722448831,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:37:58.937650    9315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:58.943667    9315 out.go:177] * [docker-flags-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:58.951633    9315 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:37:58.951690    9315 notify.go:220] Checking for updates...
	I0731 12:37:58.957496    9315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:37:58.960603    9315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:58.964603    9315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:58.967598    9315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:37:58.970678    9315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:58.974011    9315 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:58.974083    9315 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:58.974141    9315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:58.977546    9315 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:58.984631    9315 start.go:297] selected driver: qemu2
	I0731 12:37:58.984639    9315 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:58.984656    9315 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:58.987030    9315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:58.990560    9315 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:58.993666    9315 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 12:37:58.993706    9315 cni.go:84] Creating CNI manager for ""
	I0731 12:37:58.993715    9315 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:37:58.993723    9315 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:58.993750    9315 start.go:340] cluster config:
	{Name:docker-flags-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:58.997579    9315 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:59.006641    9315 out.go:177] * Starting "docker-flags-558000" primary control-plane node in "docker-flags-558000" cluster
	I0731 12:37:59.010643    9315 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:59.010663    9315 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:59.010674    9315 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:59.010750    9315 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:59.010757    9315 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:59.010825    9315 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/docker-flags-558000/config.json ...
	I0731 12:37:59.010836    9315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/docker-flags-558000/config.json: {Name:mk7051f7441f9eb9ecdf825e2a927767f26e41f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:59.011067    9315 start.go:360] acquireMachinesLock for docker-flags-558000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:59.011107    9315 start.go:364] duration metric: took 32.209µs to acquireMachinesLock for "docker-flags-558000"
	I0731 12:37:59.011118    9315 start.go:93] Provisioning new machine with config: &{Name:docker-flags-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:59.011159    9315 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:59.019611    9315 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:59.038478    9315 start.go:159] libmachine.API.Create for "docker-flags-558000" (driver="qemu2")
	I0731 12:37:59.038513    9315 client.go:168] LocalClient.Create starting
	I0731 12:37:59.038572    9315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:37:59.038606    9315 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:59.038620    9315 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:59.038665    9315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:37:59.038689    9315 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:59.038698    9315 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:59.039127    9315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:37:59.189246    9315 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:59.268365    9315 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:59.268371    9315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:59.268588    9315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:37:59.277616    9315 main.go:141] libmachine: STDOUT: 
	I0731 12:37:59.277631    9315 main.go:141] libmachine: STDERR: 
	I0731 12:37:59.277681    9315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2 +20000M
	I0731 12:37:59.285483    9315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:59.285495    9315 main.go:141] libmachine: STDERR: 
	I0731 12:37:59.285512    9315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:37:59.285515    9315 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:59.285527    9315 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:59.285556    9315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ef:d7:aa:73:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:37:59.287148    9315 main.go:141] libmachine: STDOUT: 
	I0731 12:37:59.287160    9315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:59.287177    9315 client.go:171] duration metric: took 248.662458ms to LocalClient.Create
	I0731 12:38:01.289357    9315 start.go:128] duration metric: took 2.278208083s to createHost
	I0731 12:38:01.289447    9315 start.go:83] releasing machines lock for "docker-flags-558000", held for 2.278363958s
	W0731 12:38:01.289526    9315 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:01.314627    9315 out.go:177] * Deleting "docker-flags-558000" in qemu2 ...
	W0731 12:38:01.335603    9315 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:01.335629    9315 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:06.337804    9315 start.go:360] acquireMachinesLock for docker-flags-558000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:06.338242    9315 start.go:364] duration metric: took 323.375µs to acquireMachinesLock for "docker-flags-558000"
	I0731 12:38:06.338353    9315 start.go:93] Provisioning new machine with config: &{Name:docker-flags-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:06.338630    9315 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:06.344346    9315 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:38:06.391963    9315 start.go:159] libmachine.API.Create for "docker-flags-558000" (driver="qemu2")
	I0731 12:38:06.392014    9315 client.go:168] LocalClient.Create starting
	I0731 12:38:06.392108    9315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:06.392163    9315 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:06.392181    9315 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:06.392245    9315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:06.392275    9315 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:06.392286    9315 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:06.393135    9315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:06.554457    9315 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:06.665592    9315 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:06.665598    9315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:06.665789    9315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:38:06.674984    9315 main.go:141] libmachine: STDOUT: 
	I0731 12:38:06.674998    9315 main.go:141] libmachine: STDERR: 
	I0731 12:38:06.675039    9315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2 +20000M
	I0731 12:38:06.682875    9315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:06.682889    9315 main.go:141] libmachine: STDERR: 
	I0731 12:38:06.682901    9315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:38:06.682905    9315 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:06.682913    9315 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:06.682941    9315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fe:17:5d:c8:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/docker-flags-558000/disk.qcow2
	I0731 12:38:06.684605    9315 main.go:141] libmachine: STDOUT: 
	I0731 12:38:06.684616    9315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:06.684629    9315 client.go:171] duration metric: took 292.615583ms to LocalClient.Create
	I0731 12:38:08.685529    9315 start.go:128] duration metric: took 2.34814625s to createHost
	I0731 12:38:08.685609    9315 start.go:83] releasing machines lock for "docker-flags-558000", held for 2.348621791s
	W0731 12:38:08.686064    9315 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:08.695613    9315 out.go:177] 
	W0731 12:38:08.703532    9315 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:08.703556    9315 out.go:239] * 
	* 
	W0731 12:38:08.706216    9315 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:08.713533    9315 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-558000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.33675ms)

-- stdout --
	* The control-plane node docker-flags-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-558000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-558000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-558000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-558000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-558000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-558000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.723209ms)

-- stdout --
	* The control-plane node docker-flags-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-558000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-558000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-558000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-558000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-558000\"\n"
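For reference, these two assertions read the Docker systemd unit inside the guest. On a run where the VM had actually booted, a passing check would look roughly like the following (illustrative output reconstructed from the --docker-env/--docker-opt flags under test, not captured from this run; the dockerd path and the remaining arguments will vary):

	$ out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT ...
	$ out/minikube-darwin-arm64 -p docker-flags-558000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }

The test only asserts containment: Environment must include FOO=BAR and BAZ=BAT, and ExecStart must include --debug.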
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 12:38:08.853932 -0700 PDT m=+1405.720235459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-558000 -n docker-flags-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-558000 -n docker-flags-558000: exit status 7 (29.577125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-558000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-558000
--- FAIL: TestDockerFlags (10.07s)
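Every failure in this group shares one root cause: nothing was listening on /var/run/socket_vmnet when the qemu2 driver launched socket_vmnet_client, so the client exited with "Connection refused" before QEMU ever started. A minimal triage sketch for the CI host, assuming a Homebrew-managed socket_vmnet install as implied by the /opt/socket_vmnet paths above (the brew service name is an assumption):

	# Is the control socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If brew-managed, restarting the daemon (as root) normally recreates the socket:
	sudo brew services restart socket_vmnet

Until the daemon is back, every test that provisions a qemu2 VM over the socket_vmnet network fails the same way.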

TestForceSystemdFlag (11.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.315933208s)

-- stdout --
	* [force-systemd-flag-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-812000" primary control-plane node in "force-systemd-flag-812000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-812000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:23.709244    9167 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:23.709365    9167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:23.709368    9167 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:23.709371    9167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:23.709517    9167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:37:23.710608    9167 out.go:298] Setting JSON to false
	I0731 12:37:23.726938    9167 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5812,"bootTime":1722448831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:37:23.727011    9167 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:23.731827    9167 out.go:177] * [force-systemd-flag-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:23.739902    9167 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:37:23.739950    9167 notify.go:220] Checking for updates...
	I0731 12:37:23.745811    9167 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:37:23.748831    9167 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:23.752815    9167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:23.755783    9167 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:37:23.758826    9167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:23.762102    9167 config.go:182] Loaded profile config "NoKubernetes-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:23.762170    9167 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:23.762219    9167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:23.765762    9167 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:23.772747    9167 start.go:297] selected driver: qemu2
	I0731 12:37:23.772753    9167 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:23.772759    9167 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:23.775238    9167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:23.779821    9167 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:23.782860    9167 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:37:23.782875    9167 cni.go:84] Creating CNI manager for ""
	I0731 12:37:23.782881    9167 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:37:23.782884    9167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:23.782908    9167 start.go:340] cluster config:
	{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:23.786744    9167 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:23.794781    9167 out.go:177] * Starting "force-systemd-flag-812000" primary control-plane node in "force-systemd-flag-812000" cluster
	I0731 12:37:23.798770    9167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:23.798788    9167 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:23.798799    9167 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:23.798882    9167 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:23.798888    9167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:23.798951    9167 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/force-systemd-flag-812000/config.json ...
	I0731 12:37:23.798969    9167 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/force-systemd-flag-812000/config.json: {Name:mkb6452035279942e8cd014e4d9a8e7fb12bd19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:23.799373    9167 start.go:360] acquireMachinesLock for force-systemd-flag-812000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:25.124539    9167 start.go:364] duration metric: took 1.325142833s to acquireMachinesLock for "force-systemd-flag-812000"
	I0731 12:37:25.124622    9167 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:25.124821    9167 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:25.133341    9167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:25.181463    9167 start.go:159] libmachine.API.Create for "force-systemd-flag-812000" (driver="qemu2")
	I0731 12:37:25.181507    9167 client.go:168] LocalClient.Create starting
	I0731 12:37:25.181672    9167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:37:25.181731    9167 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:25.181754    9167 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:25.181814    9167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:37:25.181859    9167 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:25.181877    9167 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:25.182572    9167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:37:25.357564    9167 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:25.511964    9167 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:25.511970    9167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:25.512209    9167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:25.521724    9167 main.go:141] libmachine: STDOUT: 
	I0731 12:37:25.521737    9167 main.go:141] libmachine: STDERR: 
	I0731 12:37:25.521785    9167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2 +20000M
	I0731 12:37:25.529651    9167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:25.529662    9167 main.go:141] libmachine: STDERR: 
	I0731 12:37:25.529685    9167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:25.529694    9167 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:25.529707    9167 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:25.529738    9167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:55:3b:de:e9:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:25.531435    9167 main.go:141] libmachine: STDOUT: 
	I0731 12:37:25.531449    9167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:25.531466    9167 client.go:171] duration metric: took 349.960709ms to LocalClient.Create
	I0731 12:37:27.533643    9167 start.go:128] duration metric: took 2.408831208s to createHost
	I0731 12:37:27.533686    9167 start.go:83] releasing machines lock for "force-systemd-flag-812000", held for 2.409148s
	W0731 12:37:27.533747    9167 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:27.540853    9167 out.go:177] * Deleting "force-systemd-flag-812000" in qemu2 ...
	W0731 12:37:27.575547    9167 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:27.575576    9167 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:32.577051    9167 start.go:360] acquireMachinesLock for force-systemd-flag-812000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:32.577126    9167 start.go:364] duration metric: took 50.709µs to acquireMachinesLock for "force-systemd-flag-812000"
	I0731 12:37:32.577142    9167 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:32.577196    9167 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:32.586096    9167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:32.602079    9167 start.go:159] libmachine.API.Create for "force-systemd-flag-812000" (driver="qemu2")
	I0731 12:37:32.602109    9167 client.go:168] LocalClient.Create starting
	I0731 12:37:32.602167    9167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:37:32.602194    9167 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:32.602203    9167 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:32.602234    9167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:37:32.602249    9167 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:32.602259    9167 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:32.602498    9167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:37:32.835582    9167 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:32.932212    9167 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:32.932218    9167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:32.932466    9167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:32.941736    9167 main.go:141] libmachine: STDOUT: 
	I0731 12:37:32.941755    9167 main.go:141] libmachine: STDERR: 
	I0731 12:37:32.941800    9167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2 +20000M
	I0731 12:37:32.949613    9167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:32.949649    9167 main.go:141] libmachine: STDERR: 
	I0731 12:37:32.949660    9167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:32.949666    9167 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:32.949672    9167 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:32.949701    9167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:b3:fe:b0:ea:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-flag-812000/disk.qcow2
	I0731 12:37:32.951363    9167 main.go:141] libmachine: STDOUT: 
	I0731 12:37:32.951383    9167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:32.951397    9167 client.go:171] duration metric: took 349.28925ms to LocalClient.Create
	I0731 12:37:34.953669    9167 start.go:128] duration metric: took 2.376462166s to createHost
	I0731 12:37:34.953741    9167 start.go:83] releasing machines lock for "force-systemd-flag-812000", held for 2.376642333s
	W0731 12:37:34.954095    9167 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:34.969499    9167 out.go:177] 
	W0731 12:37:34.972598    9167 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:34.972625    9167 out.go:239] * 
	* 
	W0731 12:37:34.975359    9167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:34.983471    9167 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-812000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-812000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-812000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.948167ms)

-- stdout --
	* The control-plane node force-systemd-flag-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-812000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-812000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 12:37:35.076979 -0700 PDT m=+1371.941362793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-812000 -n force-systemd-flag-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-812000 -n force-systemd-flag-812000: exit status 7 (34.82675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-812000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-812000
--- FAIL: TestForceSystemdFlag (11.51s)
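As with TestDockerFlags, the provisioning failure masks the property actually under test. Had the VM come up, the decisive check is the cgroup driver Docker reports inside the guest; with --force-systemd a passing run would print something like (illustrative, not captured from this run):

	$ out/minikube-darwin-arm64 -p force-systemd-flag-812000 ssh "docker info --format {{.CgroupDriver}}"
	systemd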

TestForceSystemdEnv (10.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-493000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-493000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.894635542s)

-- stdout --
	* [force-systemd-env-493000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-493000" primary control-plane node in "force-systemd-env-493000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-493000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:48.829674    9274 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:48.829803    9274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:48.829806    9274 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:48.829808    9274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:48.829967    9274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:37:48.831071    9274 out.go:298] Setting JSON to false
	I0731 12:37:48.847143    9274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5837,"bootTime":1722448831,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:37:48.847213    9274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:48.854051    9274 out.go:177] * [force-systemd-env-493000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:48.861006    9274 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:37:48.861057    9274 notify.go:220] Checking for updates...
	I0731 12:37:48.868967    9274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:37:48.872937    9274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:48.876934    9274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:48.880030    9274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:37:48.882894    9274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 12:37:48.886245    9274 config.go:182] Loaded profile config "NoKubernetes-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0731 12:37:48.886317    9274 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:48.886369    9274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:48.891003    9274 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:48.897961    9274 start.go:297] selected driver: qemu2
	I0731 12:37:48.897967    9274 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:48.897972    9274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:48.900361    9274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:48.904974    9274 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:48.908046    9274 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:37:48.908060    9274 cni.go:84] Creating CNI manager for ""
	I0731 12:37:48.908067    9274 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:37:48.908071    9274 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:48.908107    9274 start.go:340] cluster config:
	{Name:force-systemd-env-493000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:48.911796    9274 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:48.919963    9274 out.go:177] * Starting "force-systemd-env-493000" primary control-plane node in "force-systemd-env-493000" cluster
	I0731 12:37:48.923942    9274 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:48.923967    9274 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:48.923978    9274 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:48.924040    9274 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:48.924045    9274 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:48.924105    9274 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/force-systemd-env-493000/config.json ...
	I0731 12:37:48.924115    9274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/force-systemd-env-493000/config.json: {Name:mkc989a96575443d774973a648b5353be676c256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:48.924463    9274 start.go:360] acquireMachinesLock for force-systemd-env-493000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:48.924499    9274 start.go:364] duration metric: took 27µs to acquireMachinesLock for "force-systemd-env-493000"
	I0731 12:37:48.924513    9274 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:48.924544    9274 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:48.927921    9274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:48.945418    9274 start.go:159] libmachine.API.Create for "force-systemd-env-493000" (driver="qemu2")
	I0731 12:37:48.945447    9274 client.go:168] LocalClient.Create starting
	I0731 12:37:48.945505    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:37:48.945537    9274 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:48.945546    9274 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:48.945584    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:37:48.945606    9274 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:48.945614    9274 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:48.945970    9274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:37:49.096596    9274 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:49.145953    9274 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:49.145958    9274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:49.146178    9274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:49.155184    9274 main.go:141] libmachine: STDOUT: 
	I0731 12:37:49.155202    9274 main.go:141] libmachine: STDERR: 
	I0731 12:37:49.155246    9274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2 +20000M
	I0731 12:37:49.163018    9274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:49.163032    9274 main.go:141] libmachine: STDERR: 
	I0731 12:37:49.163048    9274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:49.163055    9274 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:49.163073    9274 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:49.163097    9274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d8:e8:1e:3d:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:49.164675    9274 main.go:141] libmachine: STDOUT: 
	I0731 12:37:49.164690    9274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:49.164710    9274 client.go:171] duration metric: took 219.262042ms to LocalClient.Create
	I0731 12:37:51.166880    9274 start.go:128] duration metric: took 2.242352625s to createHost
	I0731 12:37:51.167013    9274 start.go:83] releasing machines lock for "force-systemd-env-493000", held for 2.242482459s
	W0731 12:37:51.167074    9274 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:51.181148    9274 out.go:177] * Deleting "force-systemd-env-493000" in qemu2 ...
	W0731 12:37:51.206803    9274 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:51.206836    9274 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:56.208983    9274 start.go:360] acquireMachinesLock for force-systemd-env-493000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:56.209565    9274 start.go:364] duration metric: took 484.208µs to acquireMachinesLock for "force-systemd-env-493000"
	I0731 12:37:56.209761    9274 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:56.210050    9274 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:56.219480    9274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:56.271001    9274 start.go:159] libmachine.API.Create for "force-systemd-env-493000" (driver="qemu2")
	I0731 12:37:56.271082    9274 client.go:168] LocalClient.Create starting
	I0731 12:37:56.271257    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:37:56.271322    9274 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:56.271338    9274 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:56.271396    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:37:56.271443    9274 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:56.271463    9274 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:56.272030    9274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:37:56.440598    9274 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:56.633722    9274 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:56.633729    9274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:56.633957    9274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:56.643390    9274 main.go:141] libmachine: STDOUT: 
	I0731 12:37:56.643409    9274 main.go:141] libmachine: STDERR: 
	I0731 12:37:56.643457    9274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2 +20000M
	I0731 12:37:56.651289    9274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:56.651302    9274 main.go:141] libmachine: STDERR: 
	I0731 12:37:56.651312    9274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:56.651318    9274 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:56.651331    9274 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:56.651370    9274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b7:d7:e3:a3:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/force-systemd-env-493000/disk.qcow2
	I0731 12:37:56.652973    9274 main.go:141] libmachine: STDOUT: 
	I0731 12:37:56.652989    9274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:56.653003    9274 client.go:171] duration metric: took 381.910458ms to LocalClient.Create
	I0731 12:37:58.655142    9274 start.go:128] duration metric: took 2.445105541s to createHost
	I0731 12:37:58.655215    9274 start.go:83] releasing machines lock for "force-systemd-env-493000", held for 2.445632125s
	W0731 12:37:58.655599    9274 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:58.666258    9274 out.go:177] 
	W0731 12:37:58.670188    9274 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:58.670217    9274 out.go:239] * 
	* 
	W0731 12:37:58.672944    9274 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:58.682143    9274 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-493000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-493000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-493000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.47325ms)

-- stdout --
	* The control-plane node force-systemd-env-493000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-493000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-493000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 12:37:58.779751 -0700 PDT m=+1395.644512668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-493000 -n force-systemd-env-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-493000 -n force-systemd-env-493000: exit status 7 (33.044458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-493000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-493000
--- FAIL: TestForceSystemdEnv (10.09s)
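
Note: every start failure in this report reduces to the same host-side error: QEMU is launched through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning the socket_vmnet daemon is not running on the CI host. A minimal check-and-restart sketch, assuming socket_vmnet was installed via Homebrew as in the standard minikube qemu2 setup:

	# Does the daemon's unix socket exist? (path taken from the logs above)
	ls -l /var/run/socket_vmnet
	# Restart the Homebrew-managed service (assumes a Homebrew install; root is needed)
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

With the daemon reachable again, the qemu-system-aarch64 invocation shown in the log should get past host creation.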

TestErrorSpam/setup (9.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-924000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-924000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 --driver=qemu2 : exit status 80 (9.926401625s)

-- stdout --
	* [nospam-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-924000" primary control-plane node in "nospam-924000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-924000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-924000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-924000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-924000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-924000" primary control-plane node in "nospam-924000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-924000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-924000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.93s)

TestFunctional/serial/StartWithProxy (9.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-419000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.847912583s)

-- stdout --
	* [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-419000" primary control-plane node in "functional-419000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-419000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-419000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-419000" primary control-plane node in "functional-419000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-419000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51075 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (67.773834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.92s)
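
Note: the two want: assertions above (*Found network options:* and *You appear to be using a proxy*) check messages that minikube emits further along in a successful proxied start, so they fail here as a knock-on effect of the VM never being created. A repro sketch, with the command and flags copied from the test invocation and the proxy value taken from the log (the test runs its own local proxy on localhost:51075):

	env HTTP_PROXY=localhost:51075 out/minikube-darwin-arm64 start -p functional-419000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2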

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-419000 --alsologtostderr -v=8: exit status 80 (5.1816685s)

-- stdout --
	* [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-419000" primary control-plane node in "functional-419000" cluster
	* Restarting existing qemu2 VM for "functional-419000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-419000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:16:09.991306    7315 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:09.991438    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:09.991441    7315 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:09.991444    7315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:09.991592    7315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:16:09.992584    7315 out.go:298] Setting JSON to false
	I0731 12:16:10.008868    7315 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4538,"bootTime":1722448831,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:16:10.008942    7315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:16:10.013722    7315 out.go:177] * [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:16:10.019575    7315 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:16:10.019627    7315 notify.go:220] Checking for updates...
	I0731 12:16:10.025473    7315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:16:10.029445    7315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:16:10.032531    7315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:16:10.035503    7315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:16:10.038527    7315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:16:10.041790    7315 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:10.041840    7315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:16:10.046430    7315 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:16:10.053524    7315 start.go:297] selected driver: qemu2
	I0731 12:16:10.053530    7315 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:10.053583    7315 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:16:10.055870    7315 cni.go:84] Creating CNI manager for ""
	I0731 12:16:10.055883    7315 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:16:10.055921    7315 start.go:340] cluster config:
	{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:10.059446    7315 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:16:10.067444    7315 out.go:177] * Starting "functional-419000" primary control-plane node in "functional-419000" cluster
	I0731 12:16:10.070489    7315 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:16:10.070506    7315 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:16:10.070517    7315 cache.go:56] Caching tarball of preloaded images
	I0731 12:16:10.070575    7315 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:16:10.070582    7315 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:16:10.070650    7315 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/functional-419000/config.json ...
	I0731 12:16:10.071171    7315 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:16:10.071199    7315 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "functional-419000"
	I0731 12:16:10.071206    7315 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:16:10.071211    7315 fix.go:54] fixHost starting: 
	I0731 12:16:10.071329    7315 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
	W0731 12:16:10.071339    7315 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:16:10.074507    7315 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
	I0731 12:16:10.082430    7315 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:16:10.082463    7315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
	I0731 12:16:10.084473    7315 main.go:141] libmachine: STDOUT: 
	I0731 12:16:10.084495    7315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:16:10.084528    7315 fix.go:56] duration metric: took 13.317083ms for fixHost
	I0731 12:16:10.084532    7315 start.go:83] releasing machines lock for "functional-419000", held for 13.330084ms
	W0731 12:16:10.084541    7315 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:16:10.084571    7315 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:16:10.084576    7315 start.go:729] Will try again in 5 seconds ...
	I0731 12:16:15.086687    7315 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:16:15.087152    7315 start.go:364] duration metric: took 374.708µs to acquireMachinesLock for "functional-419000"
	I0731 12:16:15.087275    7315 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:16:15.087293    7315 fix.go:54] fixHost starting: 
	I0731 12:16:15.087925    7315 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
	W0731 12:16:15.087963    7315 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:16:15.094185    7315 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
	I0731 12:16:15.098213    7315 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:16:15.098471    7315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
	I0731 12:16:15.107222    7315 main.go:141] libmachine: STDOUT: 
	I0731 12:16:15.107291    7315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:16:15.107379    7315 fix.go:56] duration metric: took 20.08475ms for fixHost
	I0731 12:16:15.107436    7315 start.go:83] releasing machines lock for "functional-419000", held for 20.224541ms
	W0731 12:16:15.107646    7315 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:16:15.115244    7315 out.go:177] 
	W0731 12:16:15.119410    7315 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:16:15.119455    7315 out.go:239] * 
	* 
	W0731 12:16:15.121989    7315 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:16:15.129200    7315 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-419000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.183337375s for "functional-419000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (66.833291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
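
Note: unlike the fresh-start tests, SoftStart reuses the existing stopped profile, so minikube retries "Restarting existing qemu2 VM" instead of creating one, and both attempts hit the same refused socket. The recovery the log itself suggests, as a sketch (it only helps once socket_vmnet is reachable again; see the note under TestForceSystemdEnv):

	out/minikube-darwin-arm64 delete -p functional-419000
	out/minikube-darwin-arm64 start -p functional-419000 --driver=qemu2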

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.259459ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-419000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (30.494041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
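
Note: "current-context is not set" is expected after the failed starts, since minikube only writes a context named after the profile into the kubeconfig once a start succeeds. To confirm against the same kubeconfig the tests use (path copied from the log):

	KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig kubectl config get-contexts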

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-419000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-419000 get po -A: exit status 1 (26.429917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-419000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-419000\n"*: args "kubectl --context functional-419000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-419000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (28.540583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl images: exit status 83 (38.862041ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.650625ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-419000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.936083ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.883625ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-419000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
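
Note: exit status 83 with "host is not running: state=Stopped" means each minikube ssh step aborts before sudo crictl ever runs; the cache reload step is the only command above without a non-zero exit, since it operates on the local image cache rather than the VM. Checking host state first makes the ssh failures unsurprising:

	out/minikube-darwin-arm64 status -p functional-419000
	# prints "Stopped" here; the crictl checks need a running node, e.g.:
	out/minikube-darwin-arm64 start -p functional-419000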

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 kubectl -- --context functional-419000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 kubectl -- --context functional-419000 get pods: exit status 1 (707.658042ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-419000
	* no server found for cluster "functional-419000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-419000 kubectl -- --context functional-419000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (31.92275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)
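Note: the post-mortem's exit status 7 from `minikube status` is consistent with a fully stopped profile. Per `minikube status --help`, the exit code encodes host, cluster, and Kubernetes health in its low bits, so 1 + 2 + 4 = 7 when all three are not OK, which is why helpers_test.go annotates it "(may be ok)". A sketch of the same check (command copied from the helper above):

    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
    echo "status exit code: $?"   # 7 should indicate host, cluster, and Kubernetes all down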

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.86s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-419000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-419000 get pods: exit status 1 (950.514542ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-419000
	* no server found for cluster "functional-419000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-419000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (904.585125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.86s)
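Note: both kubectl variants (via `minikube kubectl --` above and via `out/kubectl` here) fail identically: because the earlier start never completed, `minikube start` never wrote a functional-419000 context or cluster entry into the kubeconfig, so kubectl reports a configuration error rather than a connection error. One way to confirm, using the KUBECONFIG path from the start logs in this report:

    KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig kubectl config get-contexts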

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-419000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.197991625s)

-- stdout --
	* [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-419000" primary control-plane node in "functional-419000" cluster
	* Restarting existing qemu2 VM for "functional-419000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-419000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-419000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.198652959s for "functional-419000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (66.959334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
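Note: this restart failure is the root cause running through the rest of the serial block. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the socket_vmnet daemon's socket at /var/run/socket_vmnet, so both start attempts die before the VM boots. A triage sketch (the launchd service name is an assumption; it depends on how socket_vmnet was installed on the agent):

    # is anything serving the socket the qemu2 driver expects?
    ls -l /var/run/socket_vmnet
    # if socket_vmnet runs under launchd, confirm the daemon is loaded (label is an assumption)
    sudo launchctl list | grep -i socket_vmnet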

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-419000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-419000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.587208ms)

** stderr ** 
	error: context "functional-419000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-419000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (29.313708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 logs: exit status 83 (76.481792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-537000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -o=json --download-only                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | -p download-only-207000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -o=json --download-only                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | -p download-only-014000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | --download-only -p                                                       | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | binary-mirror-327000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51044                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-327000                                                  | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| addons  | enable dashboard -p                                                      | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | addons-728000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | addons-728000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-728000 --wait=true                                             | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-728000                                                         | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -p nospam-924000 -n=1 --memory=2250 --wait=false                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-924000                                                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:16 PDT |
	| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
	| cache   | functional-419000 cache delete                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	| ssh     | functional-419000 ssh sudo                                               | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-419000                                                        | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-419000 cache reload                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-419000 kubectl --                                             | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | --context functional-419000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:16:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:16:21.019953    7391 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:21.020069    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:21.020071    7391 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:21.020073    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:21.020210    7391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:16:21.021202    7391 out.go:298] Setting JSON to false
	I0731 12:16:21.037528    7391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4550,"bootTime":1722448831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:16:21.037591    7391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:16:21.044549    7391 out.go:177] * [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:16:21.052563    7391 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:16:21.052552    7391 notify.go:220] Checking for updates...
	I0731 12:16:21.060474    7391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:16:21.064456    7391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:16:21.067372    7391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:16:21.070497    7391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:16:21.073481    7391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:16:21.076679    7391 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:21.076726    7391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:16:21.081431    7391 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:16:21.088455    7391 start.go:297] selected driver: qemu2
	I0731 12:16:21.088459    7391 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:21.088531    7391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:16:21.090973    7391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:16:21.091013    7391 cni.go:84] Creating CNI manager for ""
	I0731 12:16:21.091019    7391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:16:21.091067    7391 start.go:340] cluster config:
	{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:21.094755    7391 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:16:21.103455    7391 out.go:177] * Starting "functional-419000" primary control-plane node in "functional-419000" cluster
	I0731 12:16:21.107469    7391 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:16:21.107484    7391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:16:21.107499    7391 cache.go:56] Caching tarball of preloaded images
	I0731 12:16:21.107565    7391 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:16:21.107570    7391 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:16:21.107635    7391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/functional-419000/config.json ...
	I0731 12:16:21.108075    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:16:21.108109    7391 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "functional-419000"
	I0731 12:16:21.108116    7391 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:16:21.108120    7391 fix.go:54] fixHost starting: 
	I0731 12:16:21.108239    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
	W0731 12:16:21.108245    7391 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:16:21.116399    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
	I0731 12:16:21.120408    7391 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:16:21.120439    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
	I0731 12:16:21.122648    7391 main.go:141] libmachine: STDOUT: 
	I0731 12:16:21.122665    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:16:21.122693    7391 fix.go:56] duration metric: took 14.573375ms for fixHost
	I0731 12:16:21.122703    7391 start.go:83] releasing machines lock for "functional-419000", held for 14.583708ms
	W0731 12:16:21.122710    7391 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:16:21.122741    7391 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:16:21.122746    7391 start.go:729] Will try again in 5 seconds ...
	I0731 12:16:26.124816    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:16:26.125165    7391 start.go:364] duration metric: took 291.125µs to acquireMachinesLock for "functional-419000"
	I0731 12:16:26.125324    7391 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:16:26.125339    7391 fix.go:54] fixHost starting: 
	I0731 12:16:26.126004    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
	W0731 12:16:26.126020    7391 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:16:26.134320    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
	I0731 12:16:26.139409    7391 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:16:26.139537    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
	I0731 12:16:26.148253    7391 main.go:141] libmachine: STDOUT: 
	I0731 12:16:26.148303    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:16:26.148373    7391 fix.go:56] duration metric: took 23.033459ms for fixHost
	I0731 12:16:26.148381    7391 start.go:83] releasing machines lock for "functional-419000", held for 23.201792ms
	W0731 12:16:26.148594    7391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:16:26.155400    7391 out.go:177] 
	W0731 12:16:26.159469    7391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:16:26.159493    7391 out.go:239] * 
	W0731 12:16:26.162067    7391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:16:26.171253    7391 out.go:177] 
	
	
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-419000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-537000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -o=json --download-only                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | -p download-only-207000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -o=json --download-only                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | -p download-only-014000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | --download-only -p                                                       | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | binary-mirror-327000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51044                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-327000                                                  | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| addons  | enable dashboard -p                                                      | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | addons-728000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | addons-728000                                                            |                      |         |         |                     |                     |
| start   | -p addons-728000 --wait=true                                             | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-728000                                                         | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -p nospam-924000 -n=1 --memory=2250 --wait=false                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-924000                                                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:16 PDT |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
| cache   | functional-419000 cache delete                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
| ssh     | functional-419000 ssh sudo                                               | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-419000                                                        | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-419000 cache reload                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-419000 kubectl --                                             | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --context functional-419000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/31 12:16:21
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0731 12:16:21.019953    7391 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:21.020069    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:21.020071    7391 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:21.020073    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:21.020210    7391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:16:21.021202    7391 out.go:298] Setting JSON to false
I0731 12:16:21.037528    7391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4550,"bootTime":1722448831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0731 12:16:21.037591    7391 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0731 12:16:21.044549    7391 out.go:177] * [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0731 12:16:21.052563    7391 out.go:177]   - MINIKUBE_LOCATION=19360
I0731 12:16:21.052552    7391 notify.go:220] Checking for updates...
I0731 12:16:21.060474    7391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
I0731 12:16:21.064456    7391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0731 12:16:21.067372    7391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0731 12:16:21.070497    7391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
I0731 12:16:21.073481    7391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0731 12:16:21.076679    7391 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:21.076726    7391 driver.go:392] Setting default libvirt URI to qemu:///system
I0731 12:16:21.081431    7391 out.go:177] * Using the qemu2 driver based on existing profile
I0731 12:16:21.088455    7391 start.go:297] selected driver: qemu2
I0731 12:16:21.088459    7391 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:16:21.088531    7391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0731 12:16:21.090973    7391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0731 12:16:21.091013    7391 cni.go:84] Creating CNI manager for ""
I0731 12:16:21.091019    7391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0731 12:16:21.091067    7391 start.go:340] cluster config:
{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:16:21.094755    7391 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0731 12:16:21.103455    7391 out.go:177] * Starting "functional-419000" primary control-plane node in "functional-419000" cluster
I0731 12:16:21.107469    7391 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 12:16:21.107484    7391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 12:16:21.107499    7391 cache.go:56] Caching tarball of preloaded images
I0731 12:16:21.107565    7391 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 12:16:21.107570    7391 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 12:16:21.107635    7391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/functional-419000/config.json ...
I0731 12:16:21.108075    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:16:21.108109    7391 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "functional-419000"
I0731 12:16:21.108116    7391 start.go:96] Skipping create...Using existing machine configuration
I0731 12:16:21.108120    7391 fix.go:54] fixHost starting: 
I0731 12:16:21.108239    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
W0731 12:16:21.108245    7391 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:16:21.116399    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
I0731 12:16:21.120408    7391 qemu.go:418] Using hvf for hardware acceleration
I0731 12:16:21.120439    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
I0731 12:16:21.122648    7391 main.go:141] libmachine: STDOUT: 
I0731 12:16:21.122665    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:16:21.122693    7391 fix.go:56] duration metric: took 14.573375ms for fixHost
I0731 12:16:21.122703    7391 start.go:83] releasing machines lock for "functional-419000", held for 14.583708ms
W0731 12:16:21.122710    7391 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:16:21.122741    7391 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:16:21.122746    7391 start.go:729] Will try again in 5 seconds ...
I0731 12:16:26.124816    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:16:26.125165    7391 start.go:364] duration metric: took 291.125µs to acquireMachinesLock for "functional-419000"
I0731 12:16:26.125324    7391 start.go:96] Skipping create...Using existing machine configuration
I0731 12:16:26.125339    7391 fix.go:54] fixHost starting: 
I0731 12:16:26.126004    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
W0731 12:16:26.126020    7391 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:16:26.134320    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
I0731 12:16:26.139409    7391 qemu.go:418] Using hvf for hardware acceleration
I0731 12:16:26.139537    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
I0731 12:16:26.148253    7391 main.go:141] libmachine: STDOUT: 
I0731 12:16:26.148303    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:16:26.148373    7391 fix.go:56] duration metric: took 23.033459ms for fixHost
I0731 12:16:26.148381    7391 start.go:83] releasing machines lock for "functional-419000", held for 23.201792ms
W0731 12:16:26.148594    7391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:16:26.155400    7391 out.go:177] 
W0731 12:16:26.159469    7391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:16:26.159493    7391 out.go:239] * 
W0731 12:16:26.162067    7391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:16:26.171253    7391 out.go:177] 

* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
***
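Every failed start in this log dies at the same point: qemu2 is launched through socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the build host follows; the daemon binary path and gateway flag are assumptions inferred from the SocketVMnetClientPath and SocketVMnetPath values in the config dump above, not commands captured in this run.

    # Hypothetical triage for the recurring "Connection refused" on the
    # socket_vmnet socket; paths assume the manual /opt/socket_vmnet
    # install implied by the cluster config above.
    ls -l /var/run/socket_vmnet          # does the daemon socket exist at all?

    # If it is missing, relaunch the daemon (root is required for the
    # vmnet framework); the gateway address here is an assumed example.
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Verify a client can connect before re-running the test binary.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the client connects cleanly, re-running the suggested "minikube start -p functional-419000" should get past GUEST_PROVISION; if it still refuses, the daemon is likely exiting on startup and its own stderr is the next place to look.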
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd779367177/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-537000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -o=json --download-only                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | -p download-only-207000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -o=json --download-only                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | -p download-only-014000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-537000                                                  | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-207000                                                  | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| delete  | -p download-only-014000                                                  | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | --download-only -p                                                       | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | binary-mirror-327000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51044                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-327000                                                  | binary-mirror-327000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| addons  | enable dashboard -p                                                      | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | addons-728000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | addons-728000                                                            |                      |         |         |                     |                     |
| start   | -p addons-728000 --wait=true                                             | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-728000                                                         | addons-728000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -p nospam-924000 -n=1 --memory=2250 --wait=false                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-924000 --log_dir                                                  | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-924000                                                         | nospam-924000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:16 PDT |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-419000 cache add                                              | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
| cache   | functional-419000 cache delete                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | minikube-local-cache-test:functional-419000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
| ssh     | functional-419000 ssh sudo                                               | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-419000                                                        | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-419000 cache reload                                           | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
| ssh     | functional-419000 ssh                                                    | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT | 31 Jul 24 12:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-419000 kubectl --                                             | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --context functional-419000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-419000                                                     | functional-419000    | jenkins | v1.33.1 | 31 Jul 24 12:16 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/31 12:16:21
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0731 12:16:21.019953    7391 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:21.020069    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:21.020071    7391 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:21.020073    7391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:21.020210    7391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:16:21.021202    7391 out.go:298] Setting JSON to false
I0731 12:16:21.037528    7391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4550,"bootTime":1722448831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0731 12:16:21.037591    7391 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0731 12:16:21.044549    7391 out.go:177] * [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0731 12:16:21.052563    7391 out.go:177]   - MINIKUBE_LOCATION=19360
I0731 12:16:21.052552    7391 notify.go:220] Checking for updates...
I0731 12:16:21.060474    7391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
I0731 12:16:21.064456    7391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0731 12:16:21.067372    7391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0731 12:16:21.070497    7391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
I0731 12:16:21.073481    7391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0731 12:16:21.076679    7391 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:21.076726    7391 driver.go:392] Setting default libvirt URI to qemu:///system
I0731 12:16:21.081431    7391 out.go:177] * Using the qemu2 driver based on existing profile
I0731 12:16:21.088455    7391 start.go:297] selected driver: qemu2
I0731 12:16:21.088459    7391 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:16:21.088531    7391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0731 12:16:21.090973    7391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0731 12:16:21.091013    7391 cni.go:84] Creating CNI manager for ""
I0731 12:16:21.091019    7391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0731 12:16:21.091067    7391 start.go:340] cluster config:
{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:16:21.094755    7391 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0731 12:16:21.103455    7391 out.go:177] * Starting "functional-419000" primary control-plane node in "functional-419000" cluster
I0731 12:16:21.107469    7391 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 12:16:21.107484    7391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 12:16:21.107499    7391 cache.go:56] Caching tarball of preloaded images
I0731 12:16:21.107565    7391 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 12:16:21.107570    7391 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 12:16:21.107635    7391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/functional-419000/config.json ...
I0731 12:16:21.108075    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:16:21.108109    7391 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "functional-419000"
I0731 12:16:21.108116    7391 start.go:96] Skipping create...Using existing machine configuration
I0731 12:16:21.108120    7391 fix.go:54] fixHost starting: 
I0731 12:16:21.108239    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
W0731 12:16:21.108245    7391 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:16:21.116399    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
I0731 12:16:21.120408    7391 qemu.go:418] Using hvf for hardware acceleration
I0731 12:16:21.120439    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
I0731 12:16:21.122648    7391 main.go:141] libmachine: STDOUT: 
I0731 12:16:21.122665    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:16:21.122693    7391 fix.go:56] duration metric: took 14.573375ms for fixHost
I0731 12:16:21.122703    7391 start.go:83] releasing machines lock for "functional-419000", held for 14.583708ms
W0731 12:16:21.122710    7391 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:16:21.122741    7391 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:16:21.122746    7391 start.go:729] Will try again in 5 seconds ...
I0731 12:16:26.124816    7391 start.go:360] acquireMachinesLock for functional-419000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:16:26.125165    7391 start.go:364] duration metric: took 291.125µs to acquireMachinesLock for "functional-419000"
I0731 12:16:26.125324    7391 start.go:96] Skipping create...Using existing machine configuration
I0731 12:16:26.125339    7391 fix.go:54] fixHost starting: 
I0731 12:16:26.126004    7391 fix.go:112] recreateIfNeeded on functional-419000: state=Stopped err=<nil>
W0731 12:16:26.126020    7391 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:16:26.134320    7391 out.go:177] * Restarting existing qemu2 VM for "functional-419000" ...
I0731 12:16:26.139409    7391 qemu.go:418] Using hvf for hardware acceleration
I0731 12:16:26.139537    7391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:a5:8c:1a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/functional-419000/disk.qcow2
I0731 12:16:26.148253    7391 main.go:141] libmachine: STDOUT: 
I0731 12:16:26.148303    7391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:16:26.148373    7391 fix.go:56] duration metric: took 23.033459ms for fixHost
I0731 12:16:26.148381    7391 start.go:83] releasing machines lock for "functional-419000", held for 23.201792ms
W0731 12:16:26.148594    7391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-419000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:16:26.155400    7391 out.go:177] 
W0731 12:16:26.159469    7391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:16:26.159493    7391 out.go:239] * 
W0731 12:16:26.162067    7391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:16:26.171253    7391 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
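
Both restart attempts above die at the same point: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). That points at the build agent's environment rather than at minikube itself. A minimal triage sketch on the agent (assuming socket_vmnet runs as a launchd service, its usual install; adjust to however it was installed here):

    # Is the socket present, and is anything serving it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # Once the daemon is back, recreate the profile as the error box suggests
    minikube delete -p functional-419000
    minikube start -p functional-419000 --driver=qemu2 --network=socket_vmnet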

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-419000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-419000 apply -f testdata/invalidsvc.yaml: exit status 1 (29.279958ms)

** stderr ** 
	error: context "functional-419000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-419000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
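
This failure is downstream of the failed start above: minikube writes the functional-419000 context into the kubeconfig only after a successful start, so every later `kubectl --context functional-419000 ...` call fails before it ever reaches a cluster. The missing entry can be confirmed with stock kubectl (nothing test-specific):

    kubectl config get-contexts
    kubectl config use-context functional-419000   # reports the same missing-context error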

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-419000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-419000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-419000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-419000 --alsologtostderr -v=1] stderr:
I0731 12:17:06.622985    7695 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:06.623361    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:06.623364    7695 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:06.623366    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:06.623539    7695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:06.623762    7695 mustload.go:65] Loading cluster: functional-419000
I0731 12:17:06.623953    7695 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:06.628414    7695 out.go:177] * The control-plane node functional-419000 host is not running: state=Stopped
I0731 12:17:06.632234    7695 out.go:177]   To start a cluster, run: "minikube start -p functional-419000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (40.663083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
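
The dashboard test only watches stdout for a URL. With the host stopped, mustload prints the start hint and exits before any proxy is created, so "output didn't produce a URL" is the expected symptom. Against a healthy profile, the same invocation would print a single local proxy URL on the requested port (roughly http://127.0.0.1:36195/...) and stay in the foreground:

    minikube dashboard --url --port 36195 -p functional-419000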

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 status: exit status 7 (28.916291ms)

-- stdout --
	functional-419000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-419000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (28.780709ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-419000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 status -o json: exit status 7 (28.946583ms)

-- stdout --
	{"Name":"functional-419000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-419000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (28.471375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
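
The three invocations above exercise the default, Go-template, and JSON output paths of `minikube status`; all three consistently report Stopped, so the formatters themselves are behaving. For reference, the forms being tested are:

    minikube -p functional-419000 status
    minikube -p functional-419000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'   # the "kublet" typo is in the test source
    minikube -p functional-419000 status -o json

Exit status 7 appears to be the all-stopped case of minikube's status bitmask (host, kubelet, and apiserver all down), which is why helpers_test.go notes it "may be ok".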

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-419000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-419000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.11ms)

** stderr ** 
	error: context "functional-419000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-419000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-419000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-419000 describe po hello-node-connect: exit status 1 (25.924125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

** /stderr **
functional_test.go:1600: "kubectl --context functional-419000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-419000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-419000 logs -l app=hello-node-connect: exit status 1 (26.867417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

** /stderr **
functional_test.go:1606: "kubectl --context functional-419000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-419000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-419000 describe svc hello-node-connect: exit status 1 (26.472167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

** /stderr **
functional_test.go:1612: "kubectl --context functional-419000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (28.744625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-419000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (29.590791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "echo hello": exit status 83 (37.930458ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n"*. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "cat /etc/hostname": exit status 83 (48.4455ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-419000"- but got *"* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n"*. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (31.006541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
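
The assertions expect the guest command's plain output on stdout, so the passing case is simply (sketch):

    $ minikube -p functional-419000 ssh "echo hello"
    hello
    $ minikube -p functional-419000 ssh "cat /etc/hostname"
    functional-419000

Exit status 83 co-occurs throughout this report with the "host is not running" hint, i.e. the CLI bailed out before attempting SSH at all.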

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.543125ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.944042ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-419000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-419000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cp functional-419000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd69067738/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 cp functional-419000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd69067738/001/cp-test.txt: exit status 83 (49.512542ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 cp functional-419000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd69067738/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.96725ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd69067738/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.873625ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (40.969709ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-419000 ssh -n functional-419000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-419000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-419000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
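
CpCmd round-trips a file host-to-VM, VM-to-host, and host-to-a-new-VM-path, verifying each leg with `ssh sudo cat`; every leg fails identically here because the guest is down. Reduced to the commands under test (paths verbatim from the log):

    minikube -p functional-419000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-419000 ssh -n functional-419000 "sudo cat /home/docker/cp-test.txt"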

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7068/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/test/nested/copy/7068/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/test/nested/copy/7068/hosts": exit status 83 (46.0645ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/test/nested/copy/7068/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-419000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-419000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (32.82925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
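
FileSync relies on minikube mirroring anything under the host's MINIKUBE_HOME files tree into the VM at start, which is how /etc/test/nested/copy/7068/hosts would normally appear (7068 matches the test process id used elsewhere in this run, e.g. the 7068.pem cert names). A sketch of the mechanism, assuming the default MINIKUBE_HOME of ~/.minikube:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/7068
    cp <test file> ~/.minikube/files/etc/test/nested/copy/7068/hosts
    minikube start -p functional-419000    # file sync happens during provisioning
    minikube -p functional-419000 ssh "cat /etc/test/nested/copy/7068/hosts"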

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7068.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/7068.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/7068.pem": exit status 83 (42.0105ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7068.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /etc/ssl/certs/7068.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7068.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7068.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /usr/share/ca-certificates/7068.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /usr/share/ca-certificates/7068.pem": exit status 83 (40.648167ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7068.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /usr/share/ca-certificates/7068.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7068.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.678167ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/70682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/70682.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/70682.pem": exit status 83 (40.603958ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/70682.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /etc/ssl/certs/70682.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/70682.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/70682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /usr/share/ca-certificates/70682.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /usr/share/ca-certificates/70682.pem": exit status 83 (38.861208ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/70682.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /usr/share/ca-certificates/70682.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/70682.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.686667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed to verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-419000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-419000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (29.483458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
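
The check this test performs amounts to reading a local PEM file and diffing it against the copy minikube is expected to sync into the VM. A minimal sketch of that comparison in Go, assuming a running cluster; the binary path, profile name, and cert paths are taken from the log above, the error handling is simplified, and this loosely mirrors (rather than reproduces) functional_test.go:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Local copy of the certificate the test expects to find inside the VM.
		want, err := os.ReadFile("minikube_test2.pem")
		if err != nil {
			panic(err)
		}
		// Read the synced copy from inside the VM over `minikube ssh`.
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-419000",
			"ssh", "sudo cat /etc/ssl/certs/3ec20f2e.0").Output()
		if err != nil {
			panic(err) // exit status 83 here means the host is not running
		}
		if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(out)) {
			fmt.Println("mismatch: cert inside the VM differs from the local PEM")
		}
	}

With the host stopped, the ssh step never reaches the VM, which is why the diff above shows the advisory text in place of the certificate.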

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-419000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-419000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.929916ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-419000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-419000 -n functional-419000: exit status 7 (30.387458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-419000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
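
The go-template passed to kubectl above iterates the label keys of the first node in the NodeList. A standalone sketch of that same template against hard-coded data (the label values are illustrative) shows what a healthy run would print:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for the NodeList kubectl would feed the template.
		data := map[string]any{
			"items": []map[string]any{
				{"metadata": map[string]any{"labels": map[string]string{
					"minikube.k8s.io/name":    "functional-419000",
					"minikube.k8s.io/primary": "true",
				}}},
			},
		}
		// Same template as in the kubectl invocation: print each label key.
		tmpl := template.Must(template.New("labels").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}

Here the template never ran at all: kubectl failed first because the "functional-419000" context was never written to the kubeconfig.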

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo systemctl is-active crio": exit status 83 (45.593416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 version -o=json --components: exit status 83 (39.90225ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-419000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-419000 image ls --format short --alsologtostderr:
I0731 12:17:07.024020    7710 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:07.024188    7710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.024191    7710 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:07.024193    7710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.024324    7710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:07.024760    7710 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.024823    7710 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-419000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-419000 image ls --format table --alsologtostderr:
I0731 12:17:07.242409    7722 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:07.242563    7722 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.242566    7722 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:07.242568    7722 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.242697    7722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:07.243114    7722 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.243175    7722 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-419000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-419000 image ls --format json --alsologtostderr:
I0731 12:17:07.206537    7720 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:07.206698    7720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.206702    7720 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:07.206704    7720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.206836    7720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:07.207255    7720 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.207314    7720 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
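
A consumer of the JSON format would typically decode the array and scan repo tags for the expected image. A sketch under the assumption that each entry carries a repoTags field; that shape is an assumption for illustration, not verified against minikube's exact schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// image models only the field this check needs (assumed field name).
	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-419000",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, "registry.k8s.io/pause") {
					fmt.Println("found:", tag)
					return
				}
			}
		}
		fmt.Println("registry.k8s.io/pause not in image list") // the failure above: stdout was []
	}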

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-419000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-419000 image ls --format yaml --alsologtostderr:
I0731 12:17:07.058856    7712 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:07.058994    7712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.058998    7712 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:07.059000    7712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.059135    7712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:07.059543    7712 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.059611    7712 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh pgrep buildkitd: exit status 83 (40.978833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image build -t localhost/my-image:functional-419000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-419000 image build -t localhost/my-image:functional-419000 testdata/build --alsologtostderr:
I0731 12:17:07.135409    7716 out.go:291] Setting OutFile to fd 1 ...
I0731 12:17:07.135860    7716 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.135864    7716 out.go:304] Setting ErrFile to fd 2...
I0731 12:17:07.135866    7716 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:17:07.136036    7716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:17:07.136467    7716 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.136931    7716 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:17:07.137167    7716 build_images.go:133] succeeded building to: 
I0731 12:17:07.137170    7716 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
functional_test.go:442: expected "localhost/my-image:functional-419000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-419000 docker-env) && out/minikube-darwin-arm64 status -p functional-419000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-419000 docker-env) && out/minikube-darwin-arm64 status -p functional-419000": exit status 1 (43.835583ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2: exit status 83 (41.886667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:17:06.896155    7704 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:17:06.896843    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.896846    7704 out.go:304] Setting ErrFile to fd 2...
	I0731 12:17:06.896849    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.896996    7704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:17:06.897248    7704 mustload.go:65] Loading cluster: functional-419000
	I0731 12:17:06.897430    7704 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:17:06.900868    7704 out.go:177] * The control-plane node functional-419000 host is not running: state=Stopped
	I0731 12:17:06.904845    7704 out.go:177]   To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2: exit status 83 (41.568084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:17:06.981815    7708 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:17:06.981964    7708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.981967    7708 out.go:304] Setting ErrFile to fd 2...
	I0731 12:17:06.981970    7708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.982102    7708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:17:06.982332    7708 mustload.go:65] Loading cluster: functional-419000
	I0731 12:17:06.982506    7708 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:17:06.986677    7708 out.go:177] * The control-plane node functional-419000 host is not running: state=Stopped
	I0731 12:17:06.990880    7708 out.go:177]   To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2: exit status 83 (43.459791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:17:06.939160    7706 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:17:06.939340    7706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.939343    7706 out.go:304] Setting ErrFile to fd 2...
	I0731 12:17:06.939346    7706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.939471    7706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:17:06.939699    7706 mustload.go:65] Loading cluster: functional-419000
	I0731 12:17:06.939883    7706 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:17:06.944831    7706 out.go:177] * The control-plane node functional-419000 host is not running: state=Stopped
	I0731 12:17:06.948818    7706 out.go:177]   To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-419000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-419000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-419000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.043042ms)

                                                
                                                
** stderr ** 
	error: context "functional-419000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-419000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 service list: exit status 83 (42.577791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-419000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 service list -o json: exit status 83 (41.881125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-419000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 service --namespace=default --https --url hello-node: exit status 83 (42.8265ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-419000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 service hello-node --url --format={{.IP}}: exit status 83 (40.820375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-419000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 service hello-node --url: exit status 83 (41.796ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-419000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test.go:1565: failed to parse "* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"": parse "* The control-plane node functional-419000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-419000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
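
The parse error at the end is exactly what net/url produces for this input: the command printed a two-line advisory instead of a URL, and the embedded newline is a control character that url.Parse rejects. A minimal reproduction; the healthy-run URL shown at the bottom is illustrative, not taken from this run:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		advisory := "* The control-plane node functional-419000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-419000\""
		if _, err := url.Parse(advisory); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
		// What a healthy `service --url` run returns (address illustrative).
		if u, err := url.Parse("http://192.168.105.4:31234"); err == nil {
			fmt.Println("host:", u.Host)
		}
	}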

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0731 12:16:27.918161    7508 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:27.918294    7508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:27.918298    7508 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:27.918301    7508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:27.918451    7508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:16:27.918649    7508 mustload.go:65] Loading cluster: functional-419000
I0731 12:16:27.918832    7508 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:27.923524    7508 out.go:177] * The control-plane node functional-419000 host is not running: state=Stopped
I0731 12:16:27.936481    7508 out.go:177]   To start a cluster, run: "minikube start -p functional-419000"

                                                
                                                
stdout: * The control-plane node functional-419000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-419000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7509: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-419000": client config: context "functional-419000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-419000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-419000 get svc nginx-svc: exit status 1 (69.676208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-419000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-419000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.66s)
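
The probe itself is a plain HTTP GET that looks for the nginx welcome page in the response body. A sketch with a placeholder address, since the real URL comes from the service's LoadBalancer ingress, which never materialized here because the cluster was down:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Placeholder address; a real run would use the tunneled service IP.
		resp, err := client.Get("http://192.168.105.4")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if strings.Contains(string(body), "Welcome to nginx!") {
			fmt.Println("nginx reachable through the tunnel")
		} else {
			fmt.Println("unexpected body") // this run got an empty body instead
		}
	}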

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image load --daemon docker.io/kicbase/echo-server:functional-419000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-419000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image load --daemon docker.io/kicbase/echo-server:functional-419000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-419000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-419000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image load --daemon docker.io/kicbase/echo-server:functional-419000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-419000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image save docker.io/kicbase/echo-server:functional-419000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-419000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036101041s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
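
The dig probe can be expressed in Go with a custom net.Resolver that bypasses the system configuration and queries the cluster DNS server at 10.96.0.10 directly; with the tunnel down, this times out just as dig did. A sketch:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				// Ignore the default resolver address and ask 10.96.0.10 instead,
				// the same server the dig invocation above targeted.
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err) // matches the dig timeout above
			return
		}
		fmt.Println("resolved:", addrs)
	}

Note the scutil dump above does show resolver #8 scoping cluster.local to 10.96.0.10, so the macOS side of the DNS forwarding was configured; the server simply never answered.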

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.64s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-836000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-836000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.771435958s)

                                                
                                                
-- stdout --
	* [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-836000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:19:10.758189    7757 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:10.758301    7757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:10.758304    7757 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:10.758306    7757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:10.758429    7757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:19:10.759528    7757 out.go:298] Setting JSON to false
	I0731 12:19:10.776112    7757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4719,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:19:10.776189    7757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:19:10.781867    7757 out.go:177] * [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:19:10.788903    7757 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:19:10.788941    7757 notify.go:220] Checking for updates...
	I0731 12:19:10.796815    7757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:19:10.798173    7757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:19:10.800853    7757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:19:10.803868    7757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:19:10.806860    7757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:19:10.810044    7757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:19:10.813794    7757 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:19:10.820809    7757 start.go:297] selected driver: qemu2
	I0731 12:19:10.820816    7757 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:19:10.820823    7757 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:19:10.823079    7757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:19:10.825810    7757 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:19:10.829008    7757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:19:10.829060    7757 cni.go:84] Creating CNI manager for ""
	I0731 12:19:10.829064    7757 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 12:19:10.829069    7757 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:19:10.829101    7757 start.go:340] cluster config:
	{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:19:10.832965    7757 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:19:10.841821    7757 out.go:177] * Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	I0731 12:19:10.845814    7757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:19:10.845832    7757 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:19:10.845847    7757 cache.go:56] Caching tarball of preloaded images
	I0731 12:19:10.845922    7757 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:19:10.845927    7757 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:19:10.846125    7757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/ha-836000/config.json ...
	I0731 12:19:10.846139    7757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/ha-836000/config.json: {Name:mk1884a50331edc16960efd1ff295275924b9439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:19:10.846495    7757 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:19:10.846529    7757 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "ha-836000"
	I0731 12:19:10.846539    7757 start.go:93] Provisioning new machine with config: &{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:19:10.846571    7757 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:19:10.851911    7757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:19:10.869241    7757 start.go:159] libmachine.API.Create for "ha-836000" (driver="qemu2")
	I0731 12:19:10.869270    7757 client.go:168] LocalClient.Create starting
	I0731 12:19:10.869330    7757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:19:10.869361    7757 main.go:141] libmachine: Decoding PEM data...
	I0731 12:19:10.869371    7757 main.go:141] libmachine: Parsing certificate...
	I0731 12:19:10.869410    7757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:19:10.869435    7757 main.go:141] libmachine: Decoding PEM data...
	I0731 12:19:10.869443    7757 main.go:141] libmachine: Parsing certificate...
	I0731 12:19:10.869866    7757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:19:11.023472    7757 main.go:141] libmachine: Creating SSH key...
	I0731 12:19:11.083391    7757 main.go:141] libmachine: Creating Disk image...
	I0731 12:19:11.083397    7757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:19:11.083625    7757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:11.092678    7757 main.go:141] libmachine: STDOUT: 
	I0731 12:19:11.092694    7757 main.go:141] libmachine: STDERR: 
	I0731 12:19:11.092736    7757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2 +20000M
	I0731 12:19:11.100491    7757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:19:11.100505    7757 main.go:141] libmachine: STDERR: 
	I0731 12:19:11.100521    7757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:11.100525    7757 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:19:11.100537    7757 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:19:11.100567    7757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:09:2f:dd:39:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:11.102166    7757 main.go:141] libmachine: STDOUT: 
	I0731 12:19:11.102180    7757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:19:11.102197    7757 client.go:171] duration metric: took 232.926042ms to LocalClient.Create
	I0731 12:19:13.104335    7757 start.go:128] duration metric: took 2.257783958s to createHost
	I0731 12:19:13.104385    7757 start.go:83] releasing machines lock for "ha-836000", held for 2.25788275s
	W0731 12:19:13.104484    7757 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:19:13.115539    7757 out.go:177] * Deleting "ha-836000" in qemu2 ...
	W0731 12:19:13.151220    7757 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:19:13.151243    7757 start.go:729] Will try again in 5 seconds ...
	I0731 12:19:18.153364    7757 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:19:18.153836    7757 start.go:364] duration metric: took 379.667µs to acquireMachinesLock for "ha-836000"
	I0731 12:19:18.153959    7757 start.go:93] Provisioning new machine with config: &{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:19:18.154241    7757 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:19:18.167860    7757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:19:18.218699    7757 start.go:159] libmachine.API.Create for "ha-836000" (driver="qemu2")
	I0731 12:19:18.218746    7757 client.go:168] LocalClient.Create starting
	I0731 12:19:18.218850    7757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:19:18.218906    7757 main.go:141] libmachine: Decoding PEM data...
	I0731 12:19:18.218925    7757 main.go:141] libmachine: Parsing certificate...
	I0731 12:19:18.218998    7757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:19:18.219043    7757 main.go:141] libmachine: Decoding PEM data...
	I0731 12:19:18.219059    7757 main.go:141] libmachine: Parsing certificate...
	I0731 12:19:18.219847    7757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:19:18.379758    7757 main.go:141] libmachine: Creating SSH key...
	I0731 12:19:18.433773    7757 main.go:141] libmachine: Creating Disk image...
	I0731 12:19:18.433778    7757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:19:18.433990    7757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:18.443012    7757 main.go:141] libmachine: STDOUT: 
	I0731 12:19:18.443028    7757 main.go:141] libmachine: STDERR: 
	I0731 12:19:18.443074    7757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2 +20000M
	I0731 12:19:18.450986    7757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:19:18.450999    7757 main.go:141] libmachine: STDERR: 
	I0731 12:19:18.451022    7757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:18.451027    7757 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:19:18.451034    7757 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:19:18.451062    7757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d2:65:ae:6a:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:19:18.452665    7757 main.go:141] libmachine: STDOUT: 
	I0731 12:19:18.452678    7757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:19:18.452689    7757 client.go:171] duration metric: took 233.940541ms to LocalClient.Create
	I0731 12:19:20.454828    7757 start.go:128] duration metric: took 2.300596167s to createHost
	I0731 12:19:20.454891    7757 start.go:83] releasing machines lock for "ha-836000", held for 2.301067666s
	W0731 12:19:20.455324    7757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:19:20.470055    7757 out.go:177] 
	W0731 12:19:20.474101    7757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:19:20.474131    7757 out.go:239] * 
	* 
	W0731 12:19:20.476592    7757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:19:20.486982    7757 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-836000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (66.437084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.84s)
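Note: both VM creation attempts failed with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. nothing was serving the unix socket that socket_vmnet_client dials before handing a file descriptor to qemu-system-aarch64. A hedged triage sketch; the paths match the SocketVMnetClientPath/SocketVMnetPath values in the config above, while the --vmnet-gateway address follows socket_vmnet's documented example rather than anything in this run:

    # confirm the socket exists
    ls -l /var/run/socket_vmnet
    # if it is missing, start the daemon by hand (vmnet requires root)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet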

TestMultiControlPlane/serial/DeployApp (96.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.576459ms)

** stderr ** 
	error: cluster "ha-836000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- rollout status deployment/busybox: exit status 1 (56.979791ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.4455ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.568834ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.342166ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.484708ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.474958ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.636916ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.986083ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.976542ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.018042ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.33825ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.172459ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.519292ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.393917ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.922333ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.107791ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (30.469917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (96.28s)
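Note: every kubectl step above fails with 'cluster "ha-836000" does not exist' or 'no server found for cluster "ha-836000"', the expected cascade from the StartCluster failure: no kubeconfig entry was ever written for the profile. A quick way to confirm the missing context locally:

    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[*].name}'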

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-836000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.816166ms)

** stderr ** 
	error: no server found for cluster "ha-836000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (30.795916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-836000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-836000 -v=7 --alsologtostderr: exit status 83 (42.347333ms)

-- stdout --
	* The control-plane node ha-836000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-836000"

-- /stdout --
** stderr ** 
	I0731 12:20:56.961632    7849 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:56.962524    7849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:56.962627    7849 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:56.962631    7849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:56.962799    7849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:56.963043    7849 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:56.963230    7849 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:56.967991    7849 out.go:177] * The control-plane node ha-836000 host is not running: state=Stopped
	I0731 12:20:56.970763    7849 out.go:177]   To start a cluster, run: "minikube start -p ha-836000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-836000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.514833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-836000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-836000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.768417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-836000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-836000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-836000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.246083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-836000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-836000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.239834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)
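Note: the assertion expects 4 nodes and a "HAppy" status, but the stopped profile reports a single node. A hedged one-liner to extract the same two fields from the profile JSON (assumes jq is installed; it is not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[0] | {status: .Status, nodes: (.Config.Nodes | length)}'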

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status --output json -v=7 --alsologtostderr: exit status 7 (30.335792ms)

-- stdout --
	{"Name":"ha-836000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0731 12:20:57.164607    7861 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:57.164763    7861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.164766    7861 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:57.164769    7861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.164898    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:57.165003    7861 out.go:298] Setting JSON to true
	I0731 12:20:57.165014    7861 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:57.165076    7861 notify.go:220] Checking for updates...
	I0731 12:20:57.165201    7861 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:57.165208    7861 status.go:255] checking status of ha-836000 ...
	I0731 12:20:57.165398    7861 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:20:57.165401    7861 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:57.165404    7861 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-836000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (30.191417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
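Note: the decode error 'json: cannot unmarshal object into Go value of type []cmd.Status' arises because 'status --output json' for a single-node profile prints one JSON object (see the stdout above) while the test unmarshals into a slice. A hedged normalization that accepts either shape, again assuming jq is available:

    out/minikube-darwin-arm64 -p ha-836000 status --output json | jq 'if type == "array" then . else [.] end'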

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.133125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 12:20:57.225218    7865 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:57.225790    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.225794    7865 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:57.225796    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.225969    7865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:57.226210    7865 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:57.226407    7865 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:57.229501    7865 out.go:177] 
	W0731 12:20:57.232366    7865 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0731 12:20:57.232371    7865 out.go:239] * 
	* 
	W0731 12:20:57.234342    7865 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:20:57.238392    7865 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-836000 node stop m02 -v=7 --alsologtostderr": exit status 85
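The exit code here lines up with the GUEST_NODE_RETRIEVE message in the stderr above: the profile's config contains only one node, so there is no m02 to stop, and on this run that failure surfaces as exit status 85. A rough sketch of how a harness can run the same command and read that code (binary path and profile name taken from this log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as ha_test.go:363.
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-836000",
            "node", "stop", "m02", "-v=7", "--alsologtostderr")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit status:", ee.ExitCode()) // 85 on this run
        }
    }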
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (29.588625ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:57.271236    7867 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:57.271405    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.271408    7867 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:57.271411    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.271550    7867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:57.271677    7867 out.go:298] Setting JSON to false
	I0731 12:20:57.271685    7867 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:57.271751    7867 notify.go:220] Checking for updates...
	I0731 12:20:57.271887    7867 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:57.271895    7867 status.go:255] checking status of ha-836000 ...
	I0731 12:20:57.272116    7867 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:20:57.272121    7867 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:57.272123    7867 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (30.266542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-836000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
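This assertion is purely a JSON inspection of "profile list --output json". A trimmed sketch of that decode, keeping only the fields checked here (struct shape inferred from the escaped blob above, not taken from minikube's source):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList declares only the fields inspected; encoding/json
    // silently drops the rest of the profile config.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            // The test expects "Degraded" after a node stop; this run reports "Stopped".
            fmt.Println(p.Name, p.Status)
        }
    }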
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.442417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.71s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.194708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 12:20:57.408463    7876 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:57.408874    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.408878    7876 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:57.408880    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.409062    7876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:57.409275    7876 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:57.409470    7876 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:57.412396    7876 out.go:177] 
	W0731 12:20:57.416333    7876 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0731 12:20:57.416338    7876 out.go:239] * 
	* 
	W0731 12:20:57.418252    7876 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:20:57.422175    7876 out.go:177] 

** /stderr **
ha_test.go:422: I0731 12:20:57.408463    7876 out.go:291] Setting OutFile to fd 1 ...
I0731 12:20:57.408874    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:20:57.408878    7876 out.go:304] Setting ErrFile to fd 2...
I0731 12:20:57.408880    7876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:20:57.409062    7876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:20:57.409275    7876 mustload.go:65] Loading cluster: ha-836000
I0731 12:20:57.409470    7876 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:20:57.412396    7876 out.go:177] 
W0731 12:20:57.416333    7876 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0731 12:20:57.416338    7876 out.go:239] * 
* 
W0731 12:20:57.418252    7876 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:20:57.422175    7876 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-836000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (30.864666ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:57.456572    7878 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:57.456725    7878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.456729    7878 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:57.456731    7878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:57.456867    7878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:57.456969    7878 out.go:298] Setting JSON to false
	I0731 12:20:57.456978    7878 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:57.457032    7878 notify.go:220] Checking for updates...
	I0731 12:20:57.457158    7878 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:57.457170    7878 status.go:255] checking status of ha-836000 ...
	I0731 12:20:57.457395    7878 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:20:57.457399    7878 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:57.457401    7878 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (72.115334ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:58.916937    7880 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:58.917137    7880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:58.917142    7880 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:58.917145    7880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:58.917335    7880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:20:58.917484    7880 out.go:298] Setting JSON to false
	I0731 12:20:58.917496    7880 mustload.go:65] Loading cluster: ha-836000
	I0731 12:20:58.917533    7880 notify.go:220] Checking for updates...
	I0731 12:20:58.917781    7880 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:58.917789    7880 status.go:255] checking status of ha-836000 ...
	I0731 12:20:58.918094    7880 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:20:58.918099    7880 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:58.918102    7880 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (67.827834ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:01.017736    7882 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:01.017950    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:01.017955    7882 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:01.017959    7882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:01.018146    7882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:01.018316    7882 out.go:298] Setting JSON to false
	I0731 12:21:01.018328    7882 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:01.018379    7882 notify.go:220] Checking for updates...
	I0731 12:21:01.018600    7882 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:01.018608    7882 status.go:255] checking status of ha-836000 ...
	I0731 12:21:01.018924    7882 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:01.018929    7882 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:01.018932    7882 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (72.707958ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:03.289404    7887 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:03.289615    7887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:03.289620    7887 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:03.289623    7887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:03.289807    7887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:03.289975    7887 out.go:298] Setting JSON to false
	I0731 12:21:03.289986    7887 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:03.290037    7887 notify.go:220] Checking for updates...
	I0731 12:21:03.290246    7887 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:03.290254    7887 status.go:255] checking status of ha-836000 ...
	I0731 12:21:03.290516    7887 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:03.290521    7887 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:03.290524    7887 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (48.705084ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:07.369533    7889 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:07.369711    7889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:07.369714    7889 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:07.369716    7889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:07.369875    7889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:07.370014    7889 out.go:298] Setting JSON to false
	I0731 12:21:07.370028    7889 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:07.370065    7889 notify.go:220] Checking for updates...
	I0731 12:21:07.370213    7889 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:07.370225    7889 status.go:255] checking status of ha-836000 ...
	I0731 12:21:07.370428    7889 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:07.370433    7889 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:07.370435    7889 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (72.897042ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:12.796487    7891 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:12.796697    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:12.796701    7891 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:12.796704    7891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:12.796870    7891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:12.797041    7891 out.go:298] Setting JSON to false
	I0731 12:21:12.797056    7891 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:12.797092    7891 notify.go:220] Checking for updates...
	I0731 12:21:12.797330    7891 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:12.797338    7891 status.go:255] checking status of ha-836000 ...
	I0731 12:21:12.797628    7891 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:12.797633    7891 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:12.797636    7891 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (77.177666ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:20.710271    7893 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:20.710505    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:20.710510    7893 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:20.710513    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:20.710694    7893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:20.710897    7893 out.go:298] Setting JSON to false
	I0731 12:21:20.710910    7893 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:20.710954    7893 notify.go:220] Checking for updates...
	I0731 12:21:20.711192    7893 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:20.711200    7893 status.go:255] checking status of ha-836000 ...
	I0731 12:21:20.711472    7893 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:20.711477    7893 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:20.711480    7893 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (71.671208ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:29.869504    7895 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:29.869714    7895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:29.869718    7895 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:29.869721    7895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:29.869884    7895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:29.870041    7895 out.go:298] Setting JSON to false
	I0731 12:21:29.870053    7895 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:29.870096    7895 notify.go:220] Checking for updates...
	I0731 12:21:29.870299    7895 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:29.870306    7895 status.go:255] checking status of ha-836000 ...
	I0731 12:21:29.870594    7895 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:29.870599    7895 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:29.870603    7895 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (74.1115ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:45.055187    7897 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:45.055383    7897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:45.055388    7897 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:45.055391    7897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:45.055573    7897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:45.055733    7897 out.go:298] Setting JSON to false
	I0731 12:21:45.055745    7897 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:45.055787    7897 notify.go:220] Checking for updates...
	I0731 12:21:45.056025    7897 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:45.056035    7897 status.go:255] checking status of ha-836000 ...
	I0731 12:21:45.056308    7897 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:45.056313    7897 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:45.056316    7897 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr" : exit status 7
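The repeated status runs above are not a tight loop: their timestamps stretch from 12:20:57 to 12:21:45, so the test waits longer between attempts and only gives up once the host has never left "Stopped". A sketch of that retry shape (the intervals are illustrative, not the test's exact schedule):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        delay := time.Second
        for attempt := 1; attempt <= 9; attempt++ {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-836000",
                "status", "-v=7", "--alsologtostderr").Run()
            if err == nil {
                fmt.Println("healthy after", attempt, "attempts")
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2 // widen the gap, roughly matching the log spacing
        }
        fmt.Println("giving up: host stayed Stopped")
    }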
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (33.032833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (47.71s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-836000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-836000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (31.822917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.43s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-836000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-836000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-836000 -v=7 --alsologtostderr: (4.073597291s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-836000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-836000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224883625s)

-- stdout --
	* [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	* Restarting existing qemu2 VM for "ha-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:21:49.338856    7928 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:49.339025    7928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:49.339030    7928 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:49.339033    7928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:49.339200    7928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:49.340404    7928 out.go:298] Setting JSON to false
	I0731 12:21:49.359819    7928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4878,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:21:49.359885    7928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:21:49.364439    7928 out.go:177] * [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:21:49.372342    7928 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:21:49.372380    7928 notify.go:220] Checking for updates...
	I0731 12:21:49.380392    7928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:21:49.383296    7928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:21:49.387381    7928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:21:49.390383    7928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:21:49.393296    7928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:21:49.396674    7928 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:49.396732    7928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:21:49.400338    7928 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:21:49.407404    7928 start.go:297] selected driver: qemu2
	I0731 12:21:49.407412    7928 start.go:901] validating driver "qemu2" against &{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:21:49.407477    7928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:21:49.409942    7928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:21:49.409990    7928 cni.go:84] Creating CNI manager for ""
	I0731 12:21:49.409995    7928 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:21:49.410039    7928 start.go:340] cluster config:
	{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:21:49.413625    7928 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:21:49.421275    7928 out.go:177] * Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	I0731 12:21:49.425339    7928 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:21:49.425355    7928 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:21:49.425369    7928 cache.go:56] Caching tarball of preloaded images
	I0731 12:21:49.425431    7928 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:21:49.425438    7928 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:21:49.425503    7928 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/ha-836000/config.json ...
	I0731 12:21:49.425953    7928 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:21:49.425988    7928 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "ha-836000"
	I0731 12:21:49.425996    7928 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:21:49.426001    7928 fix.go:54] fixHost starting: 
	I0731 12:21:49.426118    7928 fix.go:112] recreateIfNeeded on ha-836000: state=Stopped err=<nil>
	W0731 12:21:49.426127    7928 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:21:49.434373    7928 out.go:177] * Restarting existing qemu2 VM for "ha-836000" ...
	I0731 12:21:49.437357    7928 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:21:49.437408    7928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d2:65:ae:6a:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:21:49.439593    7928 main.go:141] libmachine: STDOUT: 
	I0731 12:21:49.439616    7928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:21:49.439646    7928 fix.go:56] duration metric: took 13.645667ms for fixHost
	I0731 12:21:49.439650    7928 start.go:83] releasing machines lock for "ha-836000", held for 13.658208ms
	W0731 12:21:49.439658    7928 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:21:49.439697    7928 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:49.439702    7928 start.go:729] Will try again in 5 seconds ...
	I0731 12:21:54.441820    7928 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:21:54.442198    7928 start.go:364] duration metric: took 287.709µs to acquireMachinesLock for "ha-836000"
	I0731 12:21:54.442323    7928 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:21:54.442341    7928 fix.go:54] fixHost starting: 
	I0731 12:21:54.443019    7928 fix.go:112] recreateIfNeeded on ha-836000: state=Stopped err=<nil>
	W0731 12:21:54.443044    7928 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:21:54.447634    7928 out.go:177] * Restarting existing qemu2 VM for "ha-836000" ...
	I0731 12:21:54.454543    7928 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:21:54.454793    7928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d2:65:ae:6a:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:21:54.463787    7928 main.go:141] libmachine: STDOUT: 
	I0731 12:21:54.463862    7928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:21:54.463954    7928 fix.go:56] duration metric: took 21.613416ms for fixHost
	I0731 12:21:54.463971    7928 start.go:83] releasing machines lock for "ha-836000", held for 21.749ms
	W0731 12:21:54.464184    7928 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:54.470562    7928 out.go:177] 
	W0731 12:21:54.474546    7928 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:21:54.474583    7928 out.go:239] * 
	* 
	W0731 12:21:54.477169    7928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:21:54.485508    7928 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-836000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-836000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (33.247583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.43s)
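Note on the failure mode: every provisioning failure in this run bottoms out in the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, raised as soon as libmachine launches qemu-system-aarch64 through socket_vmnet_client (see the command logged above). That points at the socket_vmnet daemon on the build agent rather than at minikube itself. A minimal sketch of how one might confirm this on the host, assuming a Homebrew-managed socket_vmnet install (the paths come from the log; the brew service name is an assumption):

    # Does the daemon's unix socket exist on the path minikube is using?
    ls -l /var/run/socket_vmnet

    # socket_vmnet_client wraps an arbitrary command, so substituting
    # "true" for the qemu invocation exercises the socket handoff in
    # isolation (usage inferred from the command line logged above);
    # a healthy daemon exits 0, a dead one reproduces "Connection refused":
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

    # If the socket is missing or refusing connections, restarting the
    # daemon is the usual fix (formula/service name assumed):
    sudo brew services restart socket_vmnet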

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.096375ms)

-- stdout --
	* The control-plane node ha-836000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-836000"

-- /stdout --
** stderr ** 
	I0731 12:21:54.630126    7940 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:54.630530    7940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:54.630534    7940 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:54.630536    7940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:54.630714    7940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:54.630931    7940 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:54.631123    7940 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:54.633028    7940 out.go:177] * The control-plane node ha-836000 host is not running: state=Stopped
	I0731 12:21:54.636260    7940 out.go:177]   To start a cluster, run: "minikube start -p ha-836000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-836000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (29.913667ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:54.668397    7942 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:54.668540    7942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:54.668544    7942 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:54.668546    7942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:54.668695    7942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:54.668817    7942 out.go:298] Setting JSON to false
	I0731 12:21:54.668826    7942 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:54.668882    7942 notify.go:220] Checking for updates...
	I0731 12:21:54.669041    7942 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:54.669048    7942 status.go:255] checking status of ha-836000 ...
	I0731 12:21:54.669255    7942 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:54.669259    7942 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:54.669261    7942 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.919917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-836000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (30.279125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
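The assertion at ha_test.go:413 is a string comparison against the Status field of the profile-list JSON quoted above. The same check can be reproduced by hand; a sketch, assuming jq is available on the host (the .valid[].Status path mirrors the JSON captured in the failure message):

    # Extract the profile status the test is asserting on:
    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-836000") | .Status'
    # Prints "Stopped" for this run; the test passes only on "Degraded".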

TestMultiControlPlane/serial/StopCluster (1.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-836000 stop -v=7 --alsologtostderr: (1.810632375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr: exit status 7 (69.041375ms)

-- stdout --
	ha-836000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:21:56.654541    7961 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:56.654733    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:56.654737    7961 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:56.654740    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:56.654902    7961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:56.655055    7961 out.go:298] Setting JSON to false
	I0731 12:21:56.655066    7961 mustload.go:65] Loading cluster: ha-836000
	I0731 12:21:56.655095    7961 notify.go:220] Checking for updates...
	I0731 12:21:56.655318    7961 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:56.655329    7961 status.go:255] checking status of ha-836000 ...
	I0731 12:21:56.655607    7961 status.go:330] ha-836000 host status = "Stopped" (err=<nil>)
	I0731 12:21:56.655612    7961 status.go:343] host is not running, skipping remaining checks
	I0731 12:21:56.655615    7961 status.go:257] ha-836000 status: &{Name:ha-836000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-836000 status -v=7 --alsologtostderr": ha-836000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (32.51725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.91s)
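The three assertions above (ha_test.go:543, :549, :552) count per-node lines in the status text: the test wants two "type: Control Plane" entries, three stopped kubelets, and two stopped apiservers, but the output shows a single node. A rough manual equivalent of that counting, assuming the status text matches the lines quoted above:

    # Each count should match the test's expectation for the HA cluster shape:
    out/minikube-darwin-arm64 -p ha-836000 status | grep -c "type: Control Plane"   # expected 2, got 1
    out/minikube-darwin-arm64 -p ha-836000 status | grep -c "kubelet: Stopped"      # expected 3, got 1
    out/minikube-darwin-arm64 -p ha-836000 status | grep -c "apiserver: Stopped"    # expected 2, got 1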

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-836000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-836000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182492417s)

-- stdout --
	* [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	* Restarting existing qemu2 VM for "ha-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:21:56.717327    7965 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:56.717463    7965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:56.717466    7965 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:56.717469    7965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:56.717592    7965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:21:56.718588    7965 out.go:298] Setting JSON to false
	I0731 12:21:56.734623    7965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4885,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:21:56.734685    7965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:21:56.739670    7965 out.go:177] * [ha-836000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:21:56.742490    7965 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:21:56.742552    7965 notify.go:220] Checking for updates...
	I0731 12:21:56.749538    7965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:21:56.753468    7965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:21:56.756544    7965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:21:56.759621    7965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:21:56.762557    7965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:21:56.765800    7965 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:21:56.766066    7965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:21:56.769578    7965 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:21:56.776531    7965 start.go:297] selected driver: qemu2
	I0731 12:21:56.776540    7965 start.go:901] validating driver "qemu2" against &{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:21:56.776609    7965 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:21:56.778798    7965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:21:56.778839    7965 cni.go:84] Creating CNI manager for ""
	I0731 12:21:56.778845    7965 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:21:56.778892    7965 start.go:340] cluster config:
	{Name:ha-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-836000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:21:56.782353    7965 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:21:56.790525    7965 out.go:177] * Starting "ha-836000" primary control-plane node in "ha-836000" cluster
	I0731 12:21:56.794512    7965 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:21:56.794525    7965 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:21:56.794534    7965 cache.go:56] Caching tarball of preloaded images
	I0731 12:21:56.794580    7965 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:21:56.794585    7965 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:21:56.794634    7965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/ha-836000/config.json ...
	I0731 12:21:56.795054    7965 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:21:56.795082    7965 start.go:364] duration metric: took 21.583µs to acquireMachinesLock for "ha-836000"
	I0731 12:21:56.795090    7965 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:21:56.795097    7965 fix.go:54] fixHost starting: 
	I0731 12:21:56.795215    7965 fix.go:112] recreateIfNeeded on ha-836000: state=Stopped err=<nil>
	W0731 12:21:56.795223    7965 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:21:56.803533    7965 out.go:177] * Restarting existing qemu2 VM for "ha-836000" ...
	I0731 12:21:56.807349    7965 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:21:56.807384    7965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d2:65:ae:6a:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:21:56.809443    7965 main.go:141] libmachine: STDOUT: 
	I0731 12:21:56.809465    7965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:21:56.809493    7965 fix.go:56] duration metric: took 14.396375ms for fixHost
	I0731 12:21:56.809498    7965 start.go:83] releasing machines lock for "ha-836000", held for 14.41275ms
	W0731 12:21:56.809506    7965 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:21:56.809539    7965 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:56.809544    7965 start.go:729] Will try again in 5 seconds ...
	I0731 12:22:01.811676    7965 start.go:360] acquireMachinesLock for ha-836000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:22:01.812287    7965 start.go:364] duration metric: took 472.041µs to acquireMachinesLock for "ha-836000"
	I0731 12:22:01.812460    7965 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:22:01.812484    7965 fix.go:54] fixHost starting: 
	I0731 12:22:01.813209    7965 fix.go:112] recreateIfNeeded on ha-836000: state=Stopped err=<nil>
	W0731 12:22:01.813235    7965 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:22:01.817678    7965 out.go:177] * Restarting existing qemu2 VM for "ha-836000" ...
	I0731 12:22:01.824720    7965 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:22:01.824974    7965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d2:65:ae:6a:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/ha-836000/disk.qcow2
	I0731 12:22:01.834623    7965 main.go:141] libmachine: STDOUT: 
	I0731 12:22:01.834733    7965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:22:01.834831    7965 fix.go:56] duration metric: took 22.349625ms for fixHost
	I0731 12:22:01.834857    7965 start.go:83] releasing machines lock for "ha-836000", held for 22.504666ms
	W0731 12:22:01.835045    7965 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:22:01.842744    7965 out.go:177] 
	W0731 12:22:01.846725    7965 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:22:01.846792    7965 out.go:239] * 
	* 
	W0731 12:22:01.849418    7965 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:22:01.858695    7965 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-836000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (67.8645ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-836000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.920042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-836000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-836000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.933875ms)

-- stdout --
	* The control-plane node ha-836000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-836000"

-- /stdout --
** stderr ** 
	I0731 12:22:02.050938    7982 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:22:02.051090    7982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:22:02.051094    7982 out.go:304] Setting ErrFile to fd 2...
	I0731 12:22:02.051096    7982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:22:02.051235    7982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:22:02.051458    7982 mustload.go:65] Loading cluster: ha-836000
	I0731 12:22:02.051645    7982 config.go:182] Loaded profile config "ha-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:22:02.055728    7982 out.go:177] * The control-plane node ha-836000 host is not running: state=Stopped
	I0731 12:22:02.059631    7982 out.go:177]   To start a cluster, run: "minikube start -p ha-836000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-836000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.738292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-836000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-836000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-836000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-836000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-836000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-836000 -n ha-836000: exit status 7 (29.427333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-631000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-631000 --driver=qemu2 : exit status 80 (9.823035166s)

-- stdout --
	* [image-631000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-631000" primary control-plane node in "image-631000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-631000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-631000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-631000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-631000 -n image-631000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-631000 -n image-631000: exit status 7 (68.4755ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-631000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)
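Note on the failure mode: every failed start in this report stops at the same step. libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which suggests the socket_vmnet daemon was not running on the CI host. A minimal Go sketch, not part of the test suite and using only the socket path taken from the logs above, that reproduces just the failing dial:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied from the failing logs above; on the affected
        // host this dial is expected to fail with "connection refused".
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this dial fails, the per-test "Failed to connect" errors below are expected regardless of the qemu configuration.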

TestJSONOutput/start/Command (9.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-665000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-665000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.988619625s)

-- stdout --
	{"specversion":"1.0","id":"4130f969-020e-4a65-8635-290fa3b6fec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-665000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7792bf8e-42eb-46c7-9764-e563da4faccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"1b22ee6b-7b9e-4017-a4a7-3e27e5bd7e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig"}}
	{"specversion":"1.0","id":"f9c8c1bd-7a90-456c-8219-8e4fc2655191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2e990466-5e34-4e07-ae99-fb3610195349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffefda6c-4e18-4f36-ba08-c3d92f49edbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube"}}
	{"specversion":"1.0","id":"5b4c75ab-9b4e-473a-ab0c-ec5540abe865","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3b51a13e-7148-4851-b246-aac6b701c479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a08e551f-a7fa-45bc-8440-6e17162a5da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"10bc3935-2b90-48c1-9dae-2aa450b9418c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-665000\" primary control-plane node in \"json-output-665000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"76ec69f7-739b-4590-a737-8be35d50d2b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0179e8fe-f51e-411e-aaf2-468977cb3d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-665000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"10a57fe6-dd31-4f33-8b29-227fae9965b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b5cf4692-f07b-4051-becf-5af7eee22068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0f7e9c8d-1c34-4476-9590-b9f75b34b5cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-665000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9bebdd6c-f322-4869-b6db-13e8cceb84e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"76976b02-0a8f-4727-a68f-583185c1ccd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-665000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.99s)
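The secondary error above ("converting to cloud events: invalid character 'O' looking for beginning of value") follows from the primary one: the test decodes each stdout line as a JSON cloud event, but the failed VM launch interleaves plain-text "OUTPUT:" and "ERROR:" lines into the JSON stream. A self-contained Go sketch of that decoding step (the map-based decode is a stand-in, not minikube's actual event struct):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // One well-formed event line and the stray text line that the failed
        // qemu launch wrote to stdout, both taken from the log above.
        lines := []string{
            `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
            `OUTPUT: `,
        }
        for _, l := range lines {
            var ev map[string]any
            if err := json.Unmarshal([]byte(l), &ev); err != nil {
                // Prints: invalid character 'O' looking for beginning of value
                fmt.Printf("line %q: %v\n", l, err)
                continue
            }
            fmt.Printf("decoded event type %v\n", ev["type"])
        }
    }

The TestJSONOutput/unpause failure further down trips the same decoder on '*', since its error path prints human-readable text despite --output=json.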

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-665000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-665000 --output=json --user=testUser: exit status 83 (79.926ms)

-- stdout --
	{"specversion":"1.0","id":"bd83ccd5-27b4-45ef-a29e-7c054c2ec202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-665000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6c89e7e2-3575-4cad-9e1a-01a48e6b4893","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-665000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-665000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-665000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-665000 --output=json --user=testUser: exit status 83 (46.468416ms)

-- stdout --
	* The control-plane node json-output-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-665000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-665000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-665000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-318000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-318000 --driver=qemu2 : exit status 80 (9.764701917s)

-- stdout --
	* [first-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-318000" primary control-plane node in "first-318000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-318000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 12:22:35.610526 -0700 PDT m=+472.460571251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-320000 -n second-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-320000 -n second-320000: exit status 85 (80.583458ms)

-- stdout --
	* Profile "second-320000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-320000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-320000" host is not running, skipping log retrieval (state="* Profile \"second-320000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-320000\"")
helpers_test.go:175: Cleaning up "second-320000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-320000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 12:22:35.797791 -0700 PDT m=+472.647838543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-318000 -n first-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-318000 -n first-318000: exit status 7 (29.608042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-318000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-318000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-318000
--- FAIL: TestMinikubeProfile (10.06s)

TestMountStart/serial/StartWithMountFirst (10.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-903000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-903000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.120967917s)

-- stdout --
	* [mount-start-1-903000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-903000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-903000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-903000 -n mount-start-1-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-903000 -n mount-start-1-903000: exit status 7 (69.268417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.19s)

TestMultiNode/serial/FreshStart2Nodes (9.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-810000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-810000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.892643417s)

-- stdout --
	* [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:22:46.304676    8124 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:22:46.304819    8124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:22:46.304822    8124 out.go:304] Setting ErrFile to fd 2...
	I0731 12:22:46.304824    8124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:22:46.304972    8124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:22:46.306049    8124 out.go:298] Setting JSON to false
	I0731 12:22:46.322147    8124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4935,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:22:46.322221    8124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:22:46.328862    8124 out.go:177] * [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:22:46.336775    8124 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:22:46.336831    8124 notify.go:220] Checking for updates...
	I0731 12:22:46.343684    8124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:22:46.346801    8124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:22:46.349822    8124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:22:46.351322    8124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:22:46.354788    8124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:22:46.357963    8124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:22:46.364794    8124 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:22:46.371815    8124 start.go:297] selected driver: qemu2
	I0731 12:22:46.371822    8124 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:22:46.371830    8124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:22:46.374350    8124 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:22:46.375883    8124 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:22:46.378853    8124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:22:46.378872    8124 cni.go:84] Creating CNI manager for ""
	I0731 12:22:46.378877    8124 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 12:22:46.378892    8124 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:22:46.378926    8124 start.go:340] cluster config:
	{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:22:46.382926    8124 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:22:46.391704    8124 out.go:177] * Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	I0731 12:22:46.395822    8124 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:22:46.395840    8124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:22:46.395855    8124 cache.go:56] Caching tarball of preloaded images
	I0731 12:22:46.395922    8124 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:22:46.395929    8124 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:22:46.396172    8124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/multinode-810000/config.json ...
	I0731 12:22:46.396185    8124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/multinode-810000/config.json: {Name:mkda28e27ee65f7a3a1599b1fc45abe6cc9031ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:22:46.396574    8124 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:22:46.396610    8124 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "multinode-810000"
	I0731 12:22:46.396621    8124 start.go:93] Provisioning new machine with config: &{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:22:46.396654    8124 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:22:46.405773    8124 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:22:46.424323    8124 start.go:159] libmachine.API.Create for "multinode-810000" (driver="qemu2")
	I0731 12:22:46.424347    8124 client.go:168] LocalClient.Create starting
	I0731 12:22:46.424412    8124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:22:46.424447    8124 main.go:141] libmachine: Decoding PEM data...
	I0731 12:22:46.424457    8124 main.go:141] libmachine: Parsing certificate...
	I0731 12:22:46.424495    8124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:22:46.424519    8124 main.go:141] libmachine: Decoding PEM data...
	I0731 12:22:46.424528    8124 main.go:141] libmachine: Parsing certificate...
	I0731 12:22:46.424892    8124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:22:46.574179    8124 main.go:141] libmachine: Creating SSH key...
	I0731 12:22:46.665090    8124 main.go:141] libmachine: Creating Disk image...
	I0731 12:22:46.665095    8124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:22:46.665323    8124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:46.674490    8124 main.go:141] libmachine: STDOUT: 
	I0731 12:22:46.674507    8124 main.go:141] libmachine: STDERR: 
	I0731 12:22:46.674551    8124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2 +20000M
	I0731 12:22:46.682340    8124 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:22:46.682369    8124 main.go:141] libmachine: STDERR: 
	I0731 12:22:46.682380    8124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:46.682384    8124 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:22:46.682396    8124 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:22:46.682420    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a8:75:06:c7:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:46.684081    8124 main.go:141] libmachine: STDOUT: 
	I0731 12:22:46.684095    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:22:46.684112    8124 client.go:171] duration metric: took 259.76375ms to LocalClient.Create
	I0731 12:22:48.686277    8124 start.go:128] duration metric: took 2.289638541s to createHost
	I0731 12:22:48.686322    8124 start.go:83] releasing machines lock for "multinode-810000", held for 2.2897385s
	W0731 12:22:48.686380    8124 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:22:48.693641    8124 out.go:177] * Deleting "multinode-810000" in qemu2 ...
	W0731 12:22:48.721355    8124 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:22:48.721376    8124 start.go:729] Will try again in 5 seconds ...
	I0731 12:22:53.723468    8124 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:22:53.723906    8124 start.go:364] duration metric: took 351.917µs to acquireMachinesLock for "multinode-810000"
	I0731 12:22:53.724033    8124 start.go:93] Provisioning new machine with config: &{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:22:53.724337    8124 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:22:53.739143    8124 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:22:53.788917    8124 start.go:159] libmachine.API.Create for "multinode-810000" (driver="qemu2")
	I0731 12:22:53.788992    8124 client.go:168] LocalClient.Create starting
	I0731 12:22:53.789113    8124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:22:53.789168    8124 main.go:141] libmachine: Decoding PEM data...
	I0731 12:22:53.789184    8124 main.go:141] libmachine: Parsing certificate...
	I0731 12:22:53.789269    8124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:22:53.789313    8124 main.go:141] libmachine: Decoding PEM data...
	I0731 12:22:53.789323    8124 main.go:141] libmachine: Parsing certificate...
	I0731 12:22:53.789824    8124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:22:53.949684    8124 main.go:141] libmachine: Creating SSH key...
	I0731 12:22:54.097950    8124 main.go:141] libmachine: Creating Disk image...
	I0731 12:22:54.097961    8124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:22:54.098182    8124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:54.107874    8124 main.go:141] libmachine: STDOUT: 
	I0731 12:22:54.107898    8124 main.go:141] libmachine: STDERR: 
	I0731 12:22:54.107952    8124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2 +20000M
	I0731 12:22:54.115846    8124 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:22:54.115865    8124 main.go:141] libmachine: STDERR: 
	I0731 12:22:54.115875    8124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:54.115879    8124 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:22:54.115892    8124 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:22:54.115917    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:53:52:a0:4e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:22:54.117621    8124 main.go:141] libmachine: STDOUT: 
	I0731 12:22:54.117649    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:22:54.117660    8124 client.go:171] duration metric: took 328.667625ms to LocalClient.Create
	I0731 12:22:56.119882    8124 start.go:128] duration metric: took 2.395555625s to createHost
	I0731 12:22:56.119932    8124 start.go:83] releasing machines lock for "multinode-810000", held for 2.396036667s
	W0731 12:22:56.120272    8124 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:22:56.136826    8124 out.go:177] 
	W0731 12:22:56.141014    8124 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:22:56.141059    8124 out.go:239] * 
	* 
	W0731 12:22:56.143304    8124 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:22:56.155872    8124 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-810000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (65.203791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
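The verbose trace above narrows the failure to a single step: qemu-img convert and resize both succeed, and it is the socket_vmnet_client wrapper that exits with status 1 before qemu ever starts. A hedged Go sketch, with paths taken from the trace and the long qemu-system-aarch64 argument list deliberately elided, of invoking the wrapper the way libmachine does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the "executing: /opt/socket_vmnet/bin/socket_vmnet_client ..."
        // line in the trace; the qemu-system-aarch64 flags are elided here.
        cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet",
            "qemu-system-aarch64")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // On the affected host: "exit status 1", matching the errors above.
            fmt.Println("error:", err)
        }
    }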

TestMultiNode/serial/DeployApp2Nodes (105.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.282959ms)

** stderr ** 
	error: cluster "multinode-810000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- rollout status deployment/busybox: exit status 1 (56.476667ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.597167ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.618375ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.911209ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.3025ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.009458ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.040917ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.53875ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.614458ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.633ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.5795ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.680542ms)

** stderr ** 
	error: no server found for cluster "multinode-810000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.003542ms)
                                                
** stderr ** 
	error: no server found for cluster "multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.346791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.1445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.454333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (30.031667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (105.96s)
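
Note on the repeated multinode_test.go:505 invocations above: the test treats a failed pod-IP lookup as potentially transient ("may be temporary") and re-runs the same kubectl query several times before declaring failure. A minimal sketch of that style of polling, in Go; the retry count and sleep interval here are illustrative, not the test's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollPodIPs re-runs the kubectl jsonpath query until it succeeds or the
// retry budget is exhausted, mirroring the "may be temporary" retries above.
func pollPodIPs(profile string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			return string(out), nil
		}
		lastErr = err                // here: exit status 1, "no server found for cluster"
		time.Sleep(2 * time.Second) // illustrative interval
	}
	return "", fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", lastErr)
}

func main() {
	if ips, err := pollPodIPs("multinode-810000", 5); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("pod IPs:", ips)
	}
}

Every attempt in the log fails identically with "no server found for cluster" because the cluster never came up, so no amount of retrying can succeed here.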

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-810000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.2185ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.618625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-810000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-810000 -v 3 --alsologtostderr: exit status 83 (43.005167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-810000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:42.306314    8208 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:42.306699    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.306704    8208 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:42.306707    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.306898    8208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:42.307173    8208 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:42.307521    8208 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:42.312605    8208 out.go:177] * The control-plane node multinode-810000 host is not running: state=Stopped
	I0731 12:24:42.316619    8208 out.go:177]   To start a cluster, run: "minikube start -p multinode-810000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-810000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.406375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-810000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-810000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.259667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-810000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-810000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-810000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (30.623916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
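
Two errors stack in this test: kubectl itself fails (the context does not exist), which leaves the test decoding an empty string as JSON, and that is what produces the second message, "unexpected end of JSON input". A minimal reproduction of that second error with encoding/json:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl exited non-zero, so the test captured no stdout at all;
	// decoding an empty byte slice yields the error seen above.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}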

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-810000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-810000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-810000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-810000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.869916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
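
The assertion at multinode_test.go:166 decodes the 'profile list --output json' payload and expects the profile's Config.Nodes array to hold 3 entries; the dump above shows exactly one. A trimmed-down sketch of that check; the struct keeps only the JSON keys visible in the log, and the sample payload is abbreviated from the dump above:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList keeps only the keys the node-count check needs; the real
// payload (dumped above) carries the full cluster config.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the log: one profile, one entry in Config.Nodes.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-810000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
	}
}

The worker nodes were never added (every earlier step in this serial suite failed), so only the primary control-plane node appears in the profile.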

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status --output json --alsologtostderr: exit status 7 (29.172458ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-810000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:42.511822    8220 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:42.511972    8220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.511975    8220 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:42.511977    8220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.512110    8220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:42.512228    8220 out.go:298] Setting JSON to true
	I0731 12:24:42.512237    8220 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:42.512305    8220 notify.go:220] Checking for updates...
	I0731 12:24:42.512434    8220 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:42.512439    8220 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:42.512642    8220 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:42.512646    8220 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:42.512648    8220 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-810000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.851542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
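
The unmarshal failure at multinode_test.go:191 is a shape mismatch rather than corrupt output: with a single stopped node, 'status --output json' printed one JSON object (see the stdout above), while the test decodes into a slice ([]cmd.Status). encoding/json will not coerce an object into a slice, as this minimal sketch shows; the Status struct here is an illustrative subset of the real cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields visible in the stdout above (illustrative subset;
// the test's real type is cmd.Status).
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	obj := []byte(`{"Name":"multinode-810000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(obj, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(obj, &one)) // <nil>: the same bytes decode fine as a single object
}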

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 node stop m03: exit status 85 (45.529ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-810000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status: exit status 7 (29.203125ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr: exit status 7 (30.409167ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:42.647591    8228 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:42.647756    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.647759    8228 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:42.647761    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.647889    8228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:42.648005    8228 out.go:298] Setting JSON to false
	I0731 12:24:42.648014    8228 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:42.648065    8228 notify.go:220] Checking for updates...
	I0731 12:24:42.648225    8228 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:42.648231    8228 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:42.648442    8228 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:42.648445    8228 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:42.648448    8228 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr": multinode-810000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (30.317834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.849792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:42.707583    8232 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:42.708041    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.708044    8232 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:42.708047    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.708205    8232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:42.708428    8232 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:42.708605    8232 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:42.712997    8232 out.go:177] 
	W0731 12:24:42.716998    8232 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 12:24:42.717003    8232 out.go:239] * 
	* 
	W0731 12:24:42.718974    8232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:24:42.722964    8232 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0731 12:24:42.707583    8232 out.go:291] Setting OutFile to fd 1 ...
I0731 12:24:42.708041    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:24:42.708044    8232 out.go:304] Setting ErrFile to fd 2...
I0731 12:24:42.708047    8232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:24:42.708205    8232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
I0731 12:24:42.708428    8232 mustload.go:65] Loading cluster: multinode-810000
I0731 12:24:42.708605    8232 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:24:42.712997    8232 out.go:177] 
W0731 12:24:42.716998    8232 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 12:24:42.717003    8232 out.go:239] * 
* 
W0731 12:24:42.718974    8232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:24:42.722964    8232 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-810000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (30.262834ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:42.756496    8234 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:42.756624    8234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.756627    8234 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:42.756630    8234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:42.756776    8234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:42.756917    8234 out.go:298] Setting JSON to false
	I0731 12:24:42.756926    8234 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:42.756987    8234 notify.go:220] Checking for updates...
	I0731 12:24:42.757129    8234 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:42.757135    8234 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:42.757340    8234 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:42.757344    8234 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:42.757346    8234 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (74.928084ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:43.805629    8236 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:43.805832    8236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:43.805836    8236 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:43.805839    8236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:43.806033    8236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:43.806233    8236 out.go:298] Setting JSON to false
	I0731 12:24:43.806246    8236 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:43.806283    8236 notify.go:220] Checking for updates...
	I0731 12:24:43.806537    8236 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:43.806549    8236 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:43.806849    8236 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:43.806855    8236 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:43.806859    8236 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (74.133417ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:45.499908    8238 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:45.500107    8238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:45.500111    8238 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:45.500114    8238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:45.500324    8238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:45.500507    8238 out.go:298] Setting JSON to false
	I0731 12:24:45.500519    8238 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:45.500559    8238 notify.go:220] Checking for updates...
	I0731 12:24:45.500793    8238 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:45.500801    8238 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:45.501074    8238 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:45.501079    8238 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:45.501082    8238 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (73.79825ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:47.317281    8240 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:47.317440    8240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:47.317444    8240 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:47.317447    8240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:47.317612    8240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:47.317781    8240 out.go:298] Setting JSON to false
	I0731 12:24:47.317793    8240 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:47.317838    8240 notify.go:220] Checking for updates...
	I0731 12:24:47.318054    8240 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:47.318063    8240 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:47.318321    8240 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:47.318326    8240 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:47.318329    8240 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (73.916667ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:52.293432    8242 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:52.293609    8242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:52.293613    8242 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:52.293616    8242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:52.293811    8242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:52.293974    8242 out.go:298] Setting JSON to false
	I0731 12:24:52.293984    8242 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:52.294031    8242 notify.go:220] Checking for updates...
	I0731 12:24:52.294264    8242 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:52.294272    8242 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:52.294537    8242 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:52.294542    8242 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:52.294545    8242 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (72.643583ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:56.821750    8245 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:56.821995    8245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:56.822000    8245 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:56.822004    8245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:56.822186    8245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:24:56.822369    8245 out.go:298] Setting JSON to false
	I0731 12:24:56.822381    8245 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:24:56.822414    8245 notify.go:220] Checking for updates...
	I0731 12:24:56.822654    8245 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:56.822661    8245 status.go:255] checking status of multinode-810000 ...
	I0731 12:24:56.822966    8245 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:24:56.822971    8245 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:56.822974    8245 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (73.556833ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:25:04.157526    8247 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:04.157738    8247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:04.157742    8247 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:04.157749    8247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:04.157945    8247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:04.158110    8247 out.go:298] Setting JSON to false
	I0731 12:25:04.158121    8247 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:25:04.158151    8247 notify.go:220] Checking for updates...
	I0731 12:25:04.158396    8247 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:04.158403    8247 status.go:255] checking status of multinode-810000 ...
	I0731 12:25:04.158688    8247 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:25:04.158694    8247 status.go:343] host is not running, skipping remaining checks
	I0731 12:25:04.158697    8247 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (77.424042ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:25:17.897481    8252 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:17.897708    8252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:17.897713    8252 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:17.897716    8252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:17.897882    8252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:17.898047    8252 out.go:298] Setting JSON to false
	I0731 12:25:17.898060    8252 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:25:17.898106    8252 notify.go:220] Checking for updates...
	I0731 12:25:17.898312    8252 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:17.898321    8252 status.go:255] checking status of multinode-810000 ...
	I0731 12:25:17.898627    8252 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:25:17.898632    8252 status.go:343] host is not running, skipping remaining checks
	I0731 12:25:17.898635    8252 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr: exit status 7 (73.064416ms)

                                                
                                                
-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:25:33.817329    8256 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:33.817547    8256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:33.817552    8256 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:33.817556    8256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:33.817728    8256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:33.817909    8256 out.go:298] Setting JSON to false
	I0731 12:25:33.817923    8256 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:25:33.817951    8256 notify.go:220] Checking for updates...
	I0731 12:25:33.818199    8256 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:33.818206    8256 status.go:255] checking status of multinode-810000 ...
	I0731 12:25:33.818471    8256 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:25:33.818476    8256 status.go:343] host is not running, skipping remaining checks
	I0731 12:25:33.818479    8256 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-810000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (33.837333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.17s)
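
The timestamps in the status attempts above (12:24:42, :43, :45, :47, :52, :56, then 12:25:04, :17, :33) show the test re-polling with widening gaps before giving up. A rough sketch of backoff-style polling of that command, assuming a simple doubling delay; the test's actual schedule and attempt budget may differ:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-810000",
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			fmt.Println("host is running")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // widen the gap each round, roughly like the log's timestamps
	}
	fmt.Println("giving up: host never left the Stopped state")
}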

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-810000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-810000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-810000: (3.84579125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-810000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-810000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.209902583s)

                                                
                                                
-- stdout --
	* [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	* Restarting existing qemu2 VM for "multinode-810000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-810000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:25:37.789692    8282 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:37.789841    8282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:37.789845    8282 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:37.789847    8282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:37.790000    8282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:37.791171    8282 out.go:298] Setting JSON to false
	I0731 12:25:37.809797    8282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5106,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:25:37.809883    8282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:25:37.814047    8282 out.go:177] * [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:25:37.822097    8282 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:25:37.822153    8282 notify.go:220] Checking for updates...
	I0731 12:25:37.827055    8282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:25:37.830142    8282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:25:37.831472    8282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:37.834123    8282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:25:37.837149    8282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:37.840457    8282 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:37.840504    8282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:25:37.845044    8282 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:25:37.852147    8282 start.go:297] selected driver: qemu2
	I0731 12:25:37.852161    8282 start.go:901] validating driver "qemu2" against &{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:37.852241    8282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:37.854562    8282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:25:37.854602    8282 cni.go:84] Creating CNI manager for ""
	I0731 12:25:37.854606    8282 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:25:37.854656    8282 start.go:340] cluster config:
	{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:37.858202    8282 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:37.867129    8282 out.go:177] * Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	I0731 12:25:37.871122    8282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:25:37.871139    8282 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:25:37.871155    8282 cache.go:56] Caching tarball of preloaded images
	I0731 12:25:37.871223    8282 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:25:37.871229    8282 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:25:37.871286    8282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/multinode-810000/config.json ...
	I0731 12:25:37.871718    8282 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:37.871754    8282 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "multinode-810000"
	I0731 12:25:37.871763    8282 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:37.871768    8282 fix.go:54] fixHost starting: 
	I0731 12:25:37.871890    8282 fix.go:112] recreateIfNeeded on multinode-810000: state=Stopped err=<nil>
	W0731 12:25:37.871898    8282 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:37.875171    8282 out.go:177] * Restarting existing qemu2 VM for "multinode-810000" ...
	I0731 12:25:37.879054    8282 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:37.879099    8282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:53:52:a0:4e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:25:37.881192    8282 main.go:141] libmachine: STDOUT: 
	I0731 12:25:37.881210    8282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:37.881239    8282 fix.go:56] duration metric: took 9.472333ms for fixHost
	I0731 12:25:37.881244    8282 start.go:83] releasing machines lock for "multinode-810000", held for 9.485083ms
	W0731 12:25:37.881259    8282 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:37.881305    8282 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:37.881310    8282 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:42.882583    8282 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:42.882918    8282 start.go:364] duration metric: took 220.125µs to acquireMachinesLock for "multinode-810000"
	I0731 12:25:42.883011    8282 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:42.883026    8282 fix.go:54] fixHost starting: 
	I0731 12:25:42.883624    8282 fix.go:112] recreateIfNeeded on multinode-810000: state=Stopped err=<nil>
	W0731 12:25:42.883649    8282 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:42.890969    8282 out.go:177] * Restarting existing qemu2 VM for "multinode-810000" ...
	I0731 12:25:42.895073    8282 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:42.895197    8282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:53:52:a0:4e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:25:42.903964    8282 main.go:141] libmachine: STDOUT: 
	I0731 12:25:42.904028    8282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:42.904099    8282 fix.go:56] duration metric: took 21.071084ms for fixHost
	I0731 12:25:42.904116    8282 start.go:83] releasing machines lock for "multinode-810000", held for 21.178666ms
	W0731 12:25:42.904333    8282 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-810000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:42.912037    8282 out.go:177] 
	W0731 12:25:42.916145    8282 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:42.916168    8282 out.go:239] * 
	W0731 12:25:42.918682    8282 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:42.926001    8282 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-810000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-810000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (33.574ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.19s)
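Every failure in this group reduces to the same driver-level line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver never gets as far as booting the VM; the dial to the socket_vmnet daemon's unix socket is refused first. Below is a minimal probe of that socket using only the Go standard library; the socket path is taken verbatim from the logs, and everything else is illustrative, not minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumped above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" matches the driver failure: nothing is
		// listening, i.e. the socket_vmnet daemon is not running.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way the tests do, the likely fix is on the host (restarting the socket_vmnet service) rather than anywhere in minikube itself.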

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 node delete m03: exit status 83 (42.655625ms)

-- stdout --
	* The control-plane node multinode-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-810000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-810000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr: exit status 7 (29.936416ms)

-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:25:43.114846    8298 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:43.114989    8298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:43.114992    8298 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:43.114994    8298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:43.115140    8298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:43.115248    8298 out.go:298] Setting JSON to false
	I0731 12:25:43.115261    8298 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:25:43.115326    8298 notify.go:220] Checking for updates...
	I0731 12:25:43.115468    8298 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:43.115474    8298 status.go:255] checking status of multinode-810000 ...
	I0731 12:25:43.115672    8298 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:25:43.115676    8298 status.go:343] host is not running, skipping remaining checks
	I0731 12:25:43.115678    8298 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.961458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (1.9s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-810000 stop: (1.770485334s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status: exit status 7 (68.6335ms)

-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr: exit status 7 (33.37725ms)

-- stdout --
	multinode-810000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:25:45.017947    8314 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:45.018093    8314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.018096    8314 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:45.018098    8314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.018236    8314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:45.018356    8314 out.go:298] Setting JSON to false
	I0731 12:25:45.018371    8314 mustload.go:65] Loading cluster: multinode-810000
	I0731 12:25:45.018409    8314 notify.go:220] Checking for updates...
	I0731 12:25:45.018556    8314 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:45.018562    8314 status.go:255] checking status of multinode-810000 ...
	I0731 12:25:45.018764    8314 status.go:330] multinode-810000 host status = "Stopped" (err=<nil>)
	I0731 12:25:45.018768    8314 status.go:343] host is not running, skipping remaining checks
	I0731 12:25:45.018770    8314 status.go:257] multinode-810000 status: &{Name:multinode-810000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr": multinode-810000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-810000 status --alsologtostderr": multinode-810000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.9735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.90s)
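The repeated "status error: exit status 7 (may be ok)" lines are consistent with a fully stopped profile: minikube status encodes component state in its exit code. Assuming the bitmask scheme used by recent minikube releases (host = 1, cluster = 2, kubernetes = 4; the constant names below are illustrative, not quoted from the sources), 7 decodes to "everything stopped", matching the Stopped/Stopped/Stopped output above.

package main

import "fmt"

// Assumed flag values mirroring minikube's status exit-code bitmask.
const (
	hostNotRunning    = 1 << 0 // 1
	clusterNotRunning = 1 << 1 // 2
	k8sNotRunning     = 1 << 2 // 4
)

func main() {
	code := 7 // exit status observed in the runs above
	fmt.Println("host stopped:", code&hostNotRunning != 0)
	fmt.Println("cluster stopped:", code&clusterNotRunning != 0)
	fmt.Println("kubernetes stopped:", code&k8sNotRunning != 0)
}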

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-810000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-810000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.187418166s)

-- stdout --
	* [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	* Restarting existing qemu2 VM for "multinode-810000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-810000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:25:45.077864    8318 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:45.078000    8318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.078004    8318 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:45.078006    8318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.078131    8318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:25:45.079143    8318 out.go:298] Setting JSON to false
	I0731 12:25:45.095200    8318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5114,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:25:45.095282    8318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:25:45.098968    8318 out.go:177] * [multinode-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:25:45.107057    8318 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:25:45.107124    8318 notify.go:220] Checking for updates...
	I0731 12:25:45.114941    8318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:25:45.119044    8318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:25:45.122068    8318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:45.125058    8318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:25:45.128039    8318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:45.131272    8318 config.go:182] Loaded profile config "multinode-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:45.131539    8318 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:25:45.136028    8318 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:25:45.142923    8318 start.go:297] selected driver: qemu2
	I0731 12:25:45.142930    8318 start.go:901] validating driver "qemu2" against &{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:45.142985    8318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:45.145348    8318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:25:45.145392    8318 cni.go:84] Creating CNI manager for ""
	I0731 12:25:45.145397    8318 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:25:45.145439    8318 start.go:340] cluster config:
	{Name:multinode-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:45.149076    8318 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:45.156998    8318 out.go:177] * Starting "multinode-810000" primary control-plane node in "multinode-810000" cluster
	I0731 12:25:45.160992    8318 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:25:45.161005    8318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:25:45.161013    8318 cache.go:56] Caching tarball of preloaded images
	I0731 12:25:45.161059    8318 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:25:45.161064    8318 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:25:45.161113    8318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/multinode-810000/config.json ...
	I0731 12:25:45.161518    8318 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:45.161552    8318 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "multinode-810000"
	I0731 12:25:45.161560    8318 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:45.161567    8318 fix.go:54] fixHost starting: 
	I0731 12:25:45.161682    8318 fix.go:112] recreateIfNeeded on multinode-810000: state=Stopped err=<nil>
	W0731 12:25:45.161690    8318 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:45.169965    8318 out.go:177] * Restarting existing qemu2 VM for "multinode-810000" ...
	I0731 12:25:45.173965    8318 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:45.174004    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:53:52:a0:4e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:25:45.175990    8318 main.go:141] libmachine: STDOUT: 
	I0731 12:25:45.176008    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:45.176036    8318 fix.go:56] duration metric: took 14.47125ms for fixHost
	I0731 12:25:45.176042    8318 start.go:83] releasing machines lock for "multinode-810000", held for 14.485792ms
	W0731 12:25:45.176049    8318 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:45.176078    8318 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:45.176084    8318 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:50.178131    8318 start.go:360] acquireMachinesLock for multinode-810000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:50.178498    8318 start.go:364] duration metric: took 280.458µs to acquireMachinesLock for "multinode-810000"
	I0731 12:25:50.178619    8318 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:50.178636    8318 fix.go:54] fixHost starting: 
	I0731 12:25:50.179295    8318 fix.go:112] recreateIfNeeded on multinode-810000: state=Stopped err=<nil>
	W0731 12:25:50.179321    8318 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:50.187793    8318 out.go:177] * Restarting existing qemu2 VM for "multinode-810000" ...
	I0731 12:25:50.191612    8318 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:50.191855    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:53:52:a0:4e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/multinode-810000/disk.qcow2
	I0731 12:25:50.200985    8318 main.go:141] libmachine: STDOUT: 
	I0731 12:25:50.201090    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:50.201165    8318 fix.go:56] duration metric: took 22.530042ms for fixHost
	I0731 12:25:50.201192    8318 start.go:83] releasing machines lock for "multinode-810000", held for 22.675041ms
	W0731 12:25:50.201395    8318 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-810000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:50.208573    8318 out.go:177] 
	W0731 12:25:50.212943    8318 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:50.212967    8318 out.go:239] * 
	W0731 12:25:50.215333    8318 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:50.223750    8318 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-810000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (70.920792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
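The captured qemu command lines all end with -netdev socket,id=net0,fd=3, which is why a refused connection aborts the start before qemu ever runs: /opt/socket_vmnet/bin/socket_vmnet_client must first connect to /var/run/socket_vmnet and then hand the connected descriptor to qemu as file descriptor 3. A rough sketch of that hand-off follows (illustrative only; the real client ships with socket_vmnet, and the qemu flags are trimmed for brevity).

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// This dial is the step that fails with "Connection refused"
	// throughout the logs above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatal("cannot reach socket_vmnet: ", err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child, matching "fd=3" in the
	// captured command line.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}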

TestMultiNode/serial/ValidateNameConflict (20.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-810000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-810000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-810000-m01 --driver=qemu2 : exit status 80 (9.898081916s)

-- stdout --
	* [multinode-810000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-810000-m01" primary control-plane node in "multinode-810000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-810000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-810000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-810000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-810000-m02 --driver=qemu2 : exit status 80 (10.067758375s)

-- stdout --
	* [multinode-810000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-810000-m02" primary control-plane node in "multinode-810000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-810000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-810000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-810000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-810000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-810000: exit status 83 (78.6825ms)

-- stdout --
	* The control-plane node multinode-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-810000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-810000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-810000 -n multinode-810000: exit status 7 (29.841125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.19s)

TestPreload (10.03s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-870000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-870000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.886439334s)

-- stdout --
	* [test-preload-870000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-870000" primary control-plane node in "test-preload-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:10.634200    8373 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:10.634333    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:10.634337    8373 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:10.634339    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:10.634510    8373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:26:10.635568    8373 out.go:298] Setting JSON to false
	I0731 12:26:10.651568    8373 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5139,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:26:10.651646    8373 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:10.657871    8373 out.go:177] * [test-preload-870000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:10.665771    8373 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:26:10.665808    8373 notify.go:220] Checking for updates...
	I0731 12:26:10.673789    8373 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:26:10.677849    8373 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:10.681803    8373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:10.684825    8373 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:26:10.687831    8373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:10.691212    8373 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:26:10.691261    8373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:10.695822    8373 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:26:10.702777    8373 start.go:297] selected driver: qemu2
	I0731 12:26:10.702785    8373 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:26:10.702791    8373 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:10.705435    8373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:26:10.709854    8373 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:26:10.712916    8373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:26:10.712949    8373 cni.go:84] Creating CNI manager for ""
	I0731 12:26:10.712957    8373 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:10.712964    8373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:26:10.712996    8373 start.go:340] cluster config:
	{Name:test-preload-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:26:10.717135    8373 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.725799    8373 out.go:177] * Starting "test-preload-870000" primary control-plane node in "test-preload-870000" cluster
	I0731 12:26:10.729809    8373 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 12:26:10.729901    8373 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/test-preload-870000/config.json ...
	I0731 12:26:10.729918    8373 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/test-preload-870000/config.json: {Name:mkbba4e49bf904ddde8409cdde1a3f1f8e13d973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:26:10.729951    8373 cache.go:107] acquiring lock: {Name:mk2ef30d61cd7b3b2c45707f04664ba550fd89aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.729956    8373 cache.go:107] acquiring lock: {Name:mk10bab3ff77fd3779a4db414b75b0c5e6f4613a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.729975    8373 cache.go:107] acquiring lock: {Name:mkf08d1a5c5d25c28906e8a4c06b81fddc4dbab6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.729970    8373 cache.go:107] acquiring lock: {Name:mk79dbdb5a8e21002c34ad8b107a69b03f0dc253 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.729991    8373 cache.go:107] acquiring lock: {Name:mkdda4259762c6a8a6f6f8dd313cc2bb73d4fb48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.729988    8373 cache.go:107] acquiring lock: {Name:mkd30a7a4e8ef05e856cc10803f6c4c0b01513e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.730010    8373 cache.go:107] acquiring lock: {Name:mkbf2a1bac223f730e17b04f9051f362f02ddb78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.730010    8373 cache.go:107] acquiring lock: {Name:mk3968ecbac1838e7881259402def2e95fae048d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:10.730416    8373 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 12:26:10.730436    8373 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:26:10.730455    8373 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 12:26:10.730524    8373 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 12:26:10.730571    8373 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 12:26:10.730586    8373 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:26:10.730587    8373 start.go:360] acquireMachinesLock for test-preload-870000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:10.730650    8373 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:26:10.730650    8373 start.go:364] duration metric: took 42.334µs to acquireMachinesLock for "test-preload-870000"
	I0731 12:26:10.730690    8373 start.go:93] Provisioning new machine with config: &{Name:test-preload-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:10.730752    8373 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:10.730807    8373 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:26:10.737770    8373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:26:10.742221    8373 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:26:10.742307    8373 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 12:26:10.742467    8373 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 12:26:10.742677    8373 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 12:26:10.742687    8373 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:26:10.744523    8373 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:26:10.744543    8373 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:26:10.744642    8373 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 12:26:10.756766    8373 start.go:159] libmachine.API.Create for "test-preload-870000" (driver="qemu2")
	I0731 12:26:10.756797    8373 client.go:168] LocalClient.Create starting
	I0731 12:26:10.756909    8373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:10.756948    8373 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:10.756958    8373 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:10.757005    8373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:10.757031    8373 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:10.757041    8373 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:10.757477    8373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:10.917084    8373 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:11.041156    8373 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:11.041177    8373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:11.041422    8373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:11.051478    8373 main.go:141] libmachine: STDOUT: 
	I0731 12:26:11.051496    8373 main.go:141] libmachine: STDERR: 
	I0731 12:26:11.051568    8373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2 +20000M
	I0731 12:26:11.060479    8373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:11.060495    8373 main.go:141] libmachine: STDERR: 
	I0731 12:26:11.060508    8373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:11.060513    8373 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:11.060524    8373 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:11.060547    8373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:37:66:0e:3d:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:11.062368    8373 main.go:141] libmachine: STDOUT: 
	I0731 12:26:11.062381    8373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:11.062396    8373 client.go:171] duration metric: took 305.599333ms to LocalClient.Create
	I0731 12:26:11.135008    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0731 12:26:11.140714    8373 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:26:11.140733    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:26:11.155839    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 12:26:11.160934    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 12:26:11.164886    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:26:11.219476    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:26:11.276911    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 12:26:11.298635    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 12:26:11.298682    8373 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 568.70825ms
	I0731 12:26:11.298715    8373 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0731 12:26:11.592989    8373 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:26:11.593112    8373 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:26:11.817436    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:26:11.817517    8373 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.087580375s
	I0731 12:26:11.817547    8373 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:26:13.062693    8373 start.go:128] duration metric: took 2.3319525s to createHost
	I0731 12:26:13.062748    8373 start.go:83] releasing machines lock for "test-preload-870000", held for 2.332098708s
	W0731 12:26:13.062815    8373 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:13.070755    8373 out.go:177] * Deleting "test-preload-870000" in qemu2 ...
	W0731 12:26:13.102322    8373 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:13.102350    8373 start.go:729] Will try again in 5 seconds ...
	I0731 12:26:13.982321    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 12:26:13.982378    8373 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.252432s
	I0731 12:26:13.982407    8373 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 12:26:14.059603    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 12:26:14.059643    8373 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.329700667s
	I0731 12:26:14.059696    8373 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 12:26:14.609090    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 12:26:14.609132    8373 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.879235s
	I0731 12:26:14.609156    8373 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 12:26:15.391139    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 12:26:15.391206    8373 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.661264667s
	I0731 12:26:15.391230    8373 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 12:26:15.618385    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 12:26:15.618426    8373 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.888555291s
	I0731 12:26:15.618452    8373 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 12:26:18.102711    8373 start.go:360] acquireMachinesLock for test-preload-870000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:18.103109    8373 start.go:364] duration metric: took 324.958µs to acquireMachinesLock for "test-preload-870000"
	I0731 12:26:18.103239    8373 start.go:93] Provisioning new machine with config: &{Name:test-preload-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:18.103532    8373 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:18.115178    8373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:26:18.166805    8373 start.go:159] libmachine.API.Create for "test-preload-870000" (driver="qemu2")
	I0731 12:26:18.166861    8373 client.go:168] LocalClient.Create starting
	I0731 12:26:18.166994    8373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:18.167057    8373 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:18.167082    8373 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:18.167150    8373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:18.167193    8373 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:18.167206    8373 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:18.167697    8373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:18.326152    8373 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:18.427122    8373 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:18.427130    8373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:18.427356    8373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:18.436927    8373 main.go:141] libmachine: STDOUT: 
	I0731 12:26:18.436946    8373 main.go:141] libmachine: STDERR: 
	I0731 12:26:18.436995    8373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2 +20000M
	I0731 12:26:18.445012    8373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:18.445028    8373 main.go:141] libmachine: STDERR: 
	I0731 12:26:18.445037    8373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:18.445043    8373 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:18.445053    8373 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:18.445090    8373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:00:ce:ab:96:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/test-preload-870000/disk.qcow2
	I0731 12:26:18.446778    8373 main.go:141] libmachine: STDOUT: 
	I0731 12:26:18.446793    8373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:18.446807    8373 client.go:171] duration metric: took 279.943ms to LocalClient.Create
	I0731 12:26:19.745471    8373 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 12:26:19.745551    8373 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.015714083s
	I0731 12:26:19.745574    8373 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 12:26:19.745617    8373 cache.go:87] Successfully saved all images to host disk.
	I0731 12:26:20.449082    8373 start.go:128] duration metric: took 2.345550375s to createHost
	I0731 12:26:20.449155    8373 start.go:83] releasing machines lock for "test-preload-870000", held for 2.346058s
	W0731 12:26:20.449459    8373 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:20.463002    8373 out.go:177] 
	W0731 12:26:20.467158    8373 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:26:20.467237    8373 out.go:239] * 
	* 
	W0731 12:26:20.469826    8373 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:26:20.479061    8373 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-870000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-31 12:26:20.495671 -0700 PDT m=+697.349301251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-870000 -n test-preload-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-870000 -n test-preload-870000: exit status 7 (66.587583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-870000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-870000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-870000
--- FAIL: TestPreload (10.03s)
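
Editor's note: the failure above (and the qemu2 failures that follow) reduce to one root cause visible in the stderr: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" because nothing is listening on /var/run/socket_vmnet. A minimal pre-flight sketch for the build host follows; the daemon binary path and the --vmnet-gateway flag are assumptions based on a default socket_vmnet install, not values taken from this log:

	# Check that the daemon's Unix socket exists before starting minikube.
	ls -l /var/run/socket_vmnet
	# If it is missing, start the daemon by hand (requires root; the gateway
	# address is an assumed example, not a value from this report).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet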

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-248000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-248000 --memory=2048 --driver=qemu2 : exit status 80 (9.784769542s)

-- stdout --
	* [scheduled-stop-248000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-248000" primary control-plane node in "scheduled-stop-248000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-248000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-248000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-248000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-248000" primary control-plane node in "scheduled-stop-248000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-248000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-248000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 12:26:30.424465 -0700 PDT m=+707.278253626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-248000 -n scheduled-stop-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-248000 -n scheduled-stop-248000: exit status 7 (67.702958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-248000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-248000
--- FAIL: TestScheduledStopUnix (9.93s)
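
Editor's note: the same socket_vmnet connection failure repeats here; minikube retries host creation once ("Deleting ... in qemu2 ...", then a second "Creating qemu2 VM ...") before exiting with GUEST_PROVISION. When socket_vmnet is managed by launchd, its service state can be inspected before a run. This is a hedged sketch: the service label is an assumption based on the upstream socket_vmnet packaging, not something shown in this log:

	# Query launchd for the socket_vmnet daemon (label assumed).
	sudo launchctl print system/io.github.lima-vm.socket_vmnet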

TestSkaffold (12.14s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3717433370 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3717433370 version: (1.068783541s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-215000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-215000 --memory=2600 --driver=qemu2 : exit status 80 (9.851359459s)

-- stdout --
	* [skaffold-215000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-215000" primary control-plane node in "skaffold-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-215000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-215000" primary control-plane node in "skaffold-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 12:26:42.564877 -0700 PDT m=+719.418859084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-215000 -n skaffold-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-215000 -n skaffold-215000: exit status 7 (63.240292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-215000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-215000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-215000
--- FAIL: TestSkaffold (12.14s)
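
Editor's note: one way to rule out socket_vmnet when triaging these failures is to force the qemu2 driver onto user-mode networking. The flag value below is an assumption about the minikube CLI rather than something exercised in this run, and user networking does not support every feature the suite tests (for example, minikube tunnel):

	# Hypothetical repro without a socket_vmnet daemon (flag value assumed).
	out/minikube-darwin-arm64 start -p skaffold-215000 --memory=2600 --driver=qemu2 --network=user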

TestRunningBinaryUpgrade (621.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3819673163 start -p running-upgrade-568000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3819673163 start -p running-upgrade-568000 --memory=2200 --vm-driver=qemu2 : (1m1.780287833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.674846666s)

-- stdout --
	* [running-upgrade-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-568000" primary control-plane node in "running-upgrade-568000" cluster
	* Updating the running qemu2 "running-upgrade-568000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 12:28:06.359308    8683 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:28:06.359429    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:06.359433    8683 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:06.359436    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:06.359593    8683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:28:06.360737    8683 out.go:298] Setting JSON to false
	I0731 12:28:06.378200    8683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5255,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:28:06.378295    8683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:28:06.383225    8683 out.go:177] * [running-upgrade-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:28:06.391242    8683 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:28:06.391273    8683 notify.go:220] Checking for updates...
	I0731 12:28:06.398211    8683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:06.402233    8683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:28:06.405241    8683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:28:06.408232    8683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:28:06.411231    8683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:28:06.414443    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:06.417134    8683 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:28:06.420207    8683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:28:06.423204    8683 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:28:06.430187    8683 start.go:297] selected driver: qemu2
	I0731 12:28:06.430194    8683 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:06.430246    8683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:28:06.432471    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:28:06.432489    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:06.432519    8683 start.go:340] cluster config:
	{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:06.432567    8683 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:28:06.440231    8683 out.go:177] * Starting "running-upgrade-568000" primary control-plane node in "running-upgrade-568000" cluster
	I0731 12:28:06.444202    8683 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:06.444216    8683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:28:06.444227    8683 cache.go:56] Caching tarball of preloaded images
	I0731 12:28:06.444270    8683 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:28:06.444275    8683 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:28:06.444321    8683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/config.json ...
	I0731 12:28:06.444691    8683 start.go:360] acquireMachinesLock for running-upgrade-568000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:28:18.173917    8683 start.go:364] duration metric: took 11.72940225s to acquireMachinesLock for "running-upgrade-568000"
	I0731 12:28:18.173943    8683 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:28:18.173953    8683 fix.go:54] fixHost starting: 
	I0731 12:28:18.174747    8683 fix.go:112] recreateIfNeeded on running-upgrade-568000: state=Running err=<nil>
	W0731 12:28:18.174756    8683 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:28:18.177733    8683 out.go:177] * Updating the running qemu2 "running-upgrade-568000" VM ...
	I0731 12:28:18.184588    8683 machine.go:94] provisionDockerMachine start ...
	I0731 12:28:18.184642    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.184769    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.184774    8683 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:28:18.241381    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0731 12:28:18.241398    8683 buildroot.go:166] provisioning hostname "running-upgrade-568000"
	I0731 12:28:18.241439    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.241559    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.241566    8683 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-568000 && echo "running-upgrade-568000" | sudo tee /etc/hostname
	I0731 12:28:18.302939    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0731 12:28:18.302993    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.303110    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.303118    8683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-568000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-568000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-568000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:28:18.369370    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:18.369385    8683 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19360-6578/.minikube CaCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19360-6578/.minikube}
	I0731 12:28:18.369394    8683 buildroot.go:174] setting up certificates
	I0731 12:28:18.369398    8683 provision.go:84] configureAuth start
	I0731 12:28:18.369410    8683 provision.go:143] copyHostCerts
	I0731 12:28:18.369516    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem, removing ...
	I0731 12:28:18.369527    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem
	I0731 12:28:18.369657    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem (1078 bytes)
	I0731 12:28:18.369837    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem, removing ...
	I0731 12:28:18.369842    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem
	I0731 12:28:18.369891    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem (1123 bytes)
	I0731 12:28:18.369991    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem, removing ...
	I0731 12:28:18.369995    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem
	I0731 12:28:18.370035    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem (1679 bytes)
	I0731 12:28:18.370169    8683 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-568000 san=[127.0.0.1 localhost minikube running-upgrade-568000]
	I0731 12:28:18.592139    8683 provision.go:177] copyRemoteCerts
	I0731 12:28:18.592194    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:28:18.592204    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.624530    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 12:28:18.631644    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:28:18.647040    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:28:18.658117    8683 provision.go:87] duration metric: took 288.717416ms to configureAuth
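
Note: configureAuth refreshed the host-side ca/cert/key copies, minted a server certificate whose SANs cover 127.0.0.1, localhost, minikube and the node name, and scp'd the CA plus server pair to /etc/docker. Assuming the daemon's TLS port (2376, per the unit file below) is reachable from the host, the material can be exercised directly; an illustrative check (paths abbreviated to ~/.minikube; the log uses the full minikube-integration home):

    # Hand the client certs to the docker CLI and hit the TLS-guarded endpoint.
    docker --tlsverify \
      --tlscacert ~/.minikube/certs/ca.pem \
      --tlscert   ~/.minikube/certs/cert.pem \
      --tlskey    ~/.minikube/certs/key.pem \
      -H tcp://localhost:2376 version
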
	I0731 12:28:18.658149    8683 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:28:18.658277    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:18.658318    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.658412    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.658418    8683 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:28:18.714278    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:28:18.714290    8683 buildroot.go:70] root file system type: tmpfs
	I0731 12:28:18.714351    8683 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:28:18.714407    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.714529    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.714562    8683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:28:18.773381    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:28:18.773443    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.773568    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.773577    8683 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:28:18.830650    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
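
Note: the unit update at 12:28:18.7 is a compare-and-swap: the candidate file is written to docker.service.new, and only when diff reports a difference is it moved over the live unit and the daemon reloaded and restarted; identical content leaves the running service alone. The same pattern, generalized (function name and arguments are placeholders):

    # Replace a config file only when its content changed; restart the service on change.
    write_if_changed() {
      local new=$1 live=$2 svc=$3
      sudo diff -u "$live" "$new" || {
        sudo mv "$new" "$live"
        sudo systemctl daemon-reload && sudo systemctl restart "$svc"
      }
    }
    write_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker
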
	I0731 12:28:18.830660    8683 machine.go:97] duration metric: took 646.076875ms to provisionDockerMachine
	I0731 12:28:18.830666    8683 start.go:293] postStartSetup for "running-upgrade-568000" (driver="qemu2")
	I0731 12:28:18.830671    8683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:28:18.830719    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:28:18.830728    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.867778    8683 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:28:18.870337    8683 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:28:18.870345    8683 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/addons for local assets ...
	I0731 12:28:18.870420    8683 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/files for local assets ...
	I0731 12:28:18.870507    8683 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem -> 70682.pem in /etc/ssl/certs
	I0731 12:28:18.870603    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:28:18.874658    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:18.883113    8683 start.go:296] duration metric: took 52.441041ms for postStartSetup
	I0731 12:28:18.883132    8683 fix.go:56] duration metric: took 709.196292ms for fixHost
	I0731 12:28:18.883185    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.883324    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.883330    8683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:28:18.945275    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454099.311756363
	
	I0731 12:28:18.945284    8683 fix.go:216] guest clock: 1722454099.311756363
	I0731 12:28:18.945287    8683 fix.go:229] Guest: 2024-07-31 12:28:19.311756363 -0700 PDT Remote: 2024-07-31 12:28:18.883134 -0700 PDT m=+12.543945335 (delta=428.622363ms)
	I0731 12:28:18.945298    8683 fix.go:200] guest clock delta is within tolerance: 428.622363ms
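
Note: fix.go samples the guest clock over SSH (date +%s.%N) and compares it with the host; here the guest runs about 429 ms ahead, inside tolerance, so no forced time sync is needed. Large skew would break TLS handshakes and etcd leases. A sketch of the same measurement (assumes GNU date on the host and a running profile):

    # Measure guest-vs-host clock skew in seconds.
    guest=$(minikube ssh -- date +%s.%N)
    host=$(date +%s.%N)
    echo "delta: $(echo "$guest - $host" | bc)s"
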
	I0731 12:28:18.945303    8683 start.go:83] releasing machines lock for "running-upgrade-568000", held for 771.387333ms
	I0731 12:28:18.945379    8683 ssh_runner.go:195] Run: cat /version.json
	I0731 12:28:18.945390    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.945379    8683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:28:18.945427    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	W0731 12:28:18.974217    8683 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:28:18.974280    8683 ssh_runner.go:195] Run: systemctl --version
	I0731 12:28:18.976636    8683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:28:18.980713    8683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:28:18.980764    8683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:28:18.984213    8683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:28:18.990638    8683 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
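
Note: the two find/sed one-liners above rewrite any bridge and podman CNI configs under /etc/cni/net.d so their subnet matches the 10.244.0.0/16 pod CIDR handed to kubeadm below, and strip IPv6 dst/subnet entries. The rewritten 87-podman-bridge.conflist can be spot-checked like so (the commented output is illustrative):

    # The bridge config's subnet/gateway should now match the pod CIDR.
    minikube ssh -- sudo grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
    #   "subnet": "10.244.0.0/16"
    #   "gateway": "10.244.0.1"
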
	I0731 12:28:18.990652    8683 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.990726    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:19.000946    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:28:19.005324    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:28:19.008232    8683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:28:19.008275    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:28:19.011559    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:19.015021    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:28:19.018392    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:19.030152    8683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:28:19.033245    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:28:19.036431    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:28:19.039750    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:28:19.044514    8683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:28:19.051732    8683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:28:19.057011    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:19.199556    8683 ssh_runner.go:195] Run: sudo systemctl restart containerd
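
Note: the sed edits before this restart point containerd at the runc v2 shim and force SystemdCgroup = false, i.e. the cgroupfs driver, which matches the cgroupDriver: cgroupfs in the kubelet config further down. A cgroup-driver mismatch between runtime and kubelet is a classic pod-start failure, so the alignment matters more than the particular driver chosen. A quick check (illustrative):

    # containerd and the kubelet config must agree on the cgroup driver.
    minikube ssh -- grep SystemdCgroup /etc/containerd/config.toml
    minikube ssh -- grep cgroupDriver /var/lib/kubelet/config.yaml
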
	I0731 12:28:19.213364    8683 start.go:495] detecting cgroup driver to use...
	I0731 12:28:19.213436    8683 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:28:19.218646    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:19.223393    8683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:28:19.230437    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:19.234881    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:19.239442    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:19.245087    8683 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:28:19.246422    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:28:19.249047    8683 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:28:19.253889    8683 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:28:19.357461    8683 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:28:19.468103    8683 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:28:19.468162    8683 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:28:19.473475    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:19.568834    8683 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:27.392013    8683 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.823257834s)
	I0731 12:28:27.392147    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:28:27.397858    8683 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 12:28:27.406836    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:27.412244    8683 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:28:27.503759    8683 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:28:27.597822    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:27.667839    8683 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:28:27.674051    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:27.678845    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:27.754764    8683 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:28:27.798555    8683 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:28:27.798630    8683 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:28:27.802562    8683 start.go:563] Will wait 60s for crictl version
	I0731 12:28:27.802618    8683 ssh_runner.go:195] Run: which crictl
	I0731 12:28:27.804794    8683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:28:27.819212    8683 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
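
Note: with /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock (written at 12:28:19.239), crictl reaches Docker through the cri-dockerd shim, and the version probe above is what start.go gates on before proceeding. The same probe by hand:

    # Query the CRI endpoint the kubelet will use; should report docker 20.10.16.
    minikube ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
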
	I0731 12:28:27.819271    8683 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:27.832708    8683 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:27.850612    8683 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:28:27.850671    8683 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:28:27.851986    8683 kubeadm.go:883] updating cluster {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:28:27.852039    8683 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:27.852078    8683 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:27.863237    8683 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:27.863246    8683 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:27.863289    8683 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:27.866384    8683 ssh_runner.go:195] Run: which lz4
	I0731 12:28:27.867608    8683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:28:27.869056    8683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:28:27.869070    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:28:28.841132    8683 docker.go:649] duration metric: took 973.565125ms to copy over tarball
	I0731 12:28:28.841195    8683 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:28:30.265991    8683 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.424800042s)
	I0731 12:28:30.266007    8683 ssh_runner.go:146] rm: /preloaded.tar.lz4
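
Note: the preload path copies one lz4-compressed tarball of /var/lib/docker (~360 MB) into the guest and unpacks it over /var, which is far cheaper than pulling the eight images individually; the ~973 ms copy here is only possible because host and guest share local storage. To peek inside a preload tarball without extracting it (path as in the log, home abbreviated):

    # List the leading entries of the preload tarball.
    lz4 -dc ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
      | tar -t | head
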
	I0731 12:28:30.282834    8683 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:30.286826    8683 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:28:30.292715    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:30.370133    8683 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:31.569178    8683 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.199045667s)
	I0731 12:28:31.569275    8683 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:31.588720    8683 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:31.588730    8683 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:31.588736    8683 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
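
Note: the mismatch driving this fallback is the registry prefix: the preload ships images tagged k8s.gcr.io/*, while docker.go:691 checks for registry.k8s.io/* names, so every expected image reads as "wasn't preloaded" and the slower per-image LoadCachedImages path runs. Re-aliasing the tags inside the guest would satisfy the check; a sketch:

    # Alias the preloaded k8s.gcr.io images under the registry.k8s.io names minikube expects.
    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/$img:v1.24.1" "registry.k8s.io/$img:v1.24.1"
    done
    docker tag k8s.gcr.io/etcd:3.5.3-0 registry.k8s.io/etcd:3.5.3-0
    docker tag k8s.gcr.io/coredns/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6
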
	I0731 12:28:31.592706    8683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:31.594565    8683 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:31.596916    8683 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:28:31.597092    8683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:31.600260    8683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:31.600281    8683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:31.602752    8683 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:28:31.603066    8683 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:31.604581    8683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:31.604636    8683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:31.606400    8683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:31.606515    8683 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:31.607376    8683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:31.607605    8683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:31.609138    8683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:31.609677    8683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:31.987863    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:28:32.000665    8683 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:28:32.000690    8683 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:28:32.000741    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:28:32.012135    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:28:32.012240    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:28:32.013809    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:28:32.013820    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:28:32.019511    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.023594    8683 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:28:32.023611    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0731 12:28:32.025293    8683 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:32.025423    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.025598    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.038928    8683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:28:32.038951    8683 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.039003    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.062215    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.065618    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.077157    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.082149    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:28:32.082204    8683 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:28:32.082222    8683 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.082257    8683 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:28:32.082268    8683 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.082274    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.082296    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.082337    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:28:32.082361    8683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:28:32.082370    8683 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.082392    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.093734    8683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:28:32.093755    8683 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.093804    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.116972    8683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:28:32.116994    8683 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.117049    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.122581    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:28:32.122594    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:28:32.122632    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:28:32.122675    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:28:32.122689    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:32.122727    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:32.132036    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:28:32.132050    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:28:32.132065    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:28:32.132064    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:28:32.132117    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0731 12:28:32.187028    8683 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:32.187132    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.224165    8683 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:32.224181    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:28:32.227264    8683 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:28:32.227293    8683 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.227348    8683 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.329717    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:28:32.465592    8683 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:32.465605    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:28:32.611135    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:28:32.611180    8683 cache_images.go:92] duration metric: took 1.022453583s to LoadCachedImages
	W0731 12:28:32.611222    8683 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
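
Note: only pause, coredns and etcd were transferred and loaded above; the kube-scheduler cache file apparently never materialized on the host under ~/.minikube/cache/images/arm64, so the stat fails and LoadCachedImages aborts, printing the error twice (once to the log, once to the console). The host-side cache can be inspected directly:

    # Only images that finished transferring appear in the host cache.
    ls ~/.minikube/cache/images/arm64/registry.k8s.io/
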
	I0731 12:28:32.611228    8683 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:28:32.611279    8683 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-568000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:28:32.611347    8683 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:28:32.625459    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:28:32.625470    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:32.625475    8683 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:28:32.625483    8683 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-568000 NodeName:running-upgrade-568000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:28:32.625542    8683 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-568000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:28:32.625604    8683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:28:32.629498    8683 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:28:32.629530    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:28:32.632332    8683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:28:32.637688    8683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:28:32.642724    8683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
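
Note: the generated kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new (2096 bytes) and is diffed against the live copy further down. It can also be validated against the pinned kubeadm binary before use; an illustrative dry run:

    # Dry-run the generated config against the exact kubeadm binary minikube will use.
    minikube ssh -- sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
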
	I0731 12:28:32.648584    8683 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:28:32.649983    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:32.734653    8683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:28:32.740781    8683 certs.go:68] Setting up /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000 for IP: 10.0.2.15
	I0731 12:28:32.740788    8683 certs.go:194] generating shared ca certs ...
	I0731 12:28:32.740796    8683 certs.go:226] acquiring lock for ca certs: {Name:mk2e60bc5d1dd01990778560005f880e3d93cfec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.740937    8683 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key
	I0731 12:28:32.740972    8683 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key
	I0731 12:28:32.740979    8683 certs.go:256] generating profile certs ...
	I0731 12:28:32.741052    8683 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key
	I0731 12:28:32.741067    8683 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092
	I0731 12:28:32.741084    8683 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:28:32.928997    8683 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 ...
	I0731 12:28:32.929012    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092: {Name:mkdb24f8131ee81d433f06e0864d95e66ab19f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.929586    8683 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 ...
	I0731 12:28:32.929595    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092: {Name:mk150864cffac15216489bfedc4872743595342f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.929757    8683 certs.go:381] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt
	I0731 12:28:32.929894    8683 certs.go:385] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key
	I0731 12:28:32.930053    8683 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.key
	I0731 12:28:32.930188    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem (1338 bytes)
	W0731 12:28:32.930212    8683 certs.go:480] ignoring /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068_empty.pem, impossibly tiny 0 bytes
	I0731 12:28:32.930217    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:28:32.930237    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem (1078 bytes)
	I0731 12:28:32.930255    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:28:32.930274    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem (1679 bytes)
	I0731 12:28:32.930312    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:32.930626    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:28:32.938414    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:28:32.947670    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:28:32.955225    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:28:32.962300    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:28:32.968757    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 12:28:32.975876    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:28:32.983354    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:28:32.990559    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /usr/share/ca-certificates/70682.pem (1708 bytes)
	I0731 12:28:32.997262    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:28:33.004607    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem --> /usr/share/ca-certificates/7068.pem (1338 bytes)
	I0731 12:28:33.011739    8683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:28:33.016858    8683 ssh_runner.go:195] Run: openssl version
	I0731 12:28:33.018658    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7068.pem && ln -fs /usr/share/ca-certificates/7068.pem /etc/ssl/certs/7068.pem"
	I0731 12:28:33.021877    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.023570    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:16 /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.023592    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.025396    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7068.pem /etc/ssl/certs/51391683.0"
	I0731 12:28:33.028685    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70682.pem && ln -fs /usr/share/ca-certificates/70682.pem /etc/ssl/certs/70682.pem"
	I0731 12:28:33.032400    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.034148    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:16 /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.034170    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.036274    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70682.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:28:33.039412    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:28:33.042752    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.044525    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.044547    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.046769    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:28:33.049793    8683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:28:33.051723    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:28:33.053630    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:28:33.055708    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:28:33.058043    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:28:33.060596    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:28:33.062569    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
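
Note: each -checkend 86400 probe above exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit is what makes minikube regenerate control-plane certs. Standalone form:

    # Exit 0 if the cert stays valid for at least another 24h (86400 s), 1 otherwise.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "ok for 24h" || echo "expiring soon: regenerate"
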
	I0731 12:28:33.064602    8683 kubeadm.go:392] StartCluster: {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:33.064679    8683 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:33.075245    8683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:28:33.078649    8683 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:28:33.078654    8683 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:28:33.078676    8683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:28:33.082027    8683 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.082324    8683 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-568000" does not appear in /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:33.082426    8683 kubeconfig.go:62] /Users/jenkins/minikube-integration/19360-6578/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-568000" cluster setting kubeconfig missing "running-upgrade-568000" context setting]
	I0731 12:28:33.082638    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:33.083035    8683 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b981b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:28:33.083396    8683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:28:33.086166    8683 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-568000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
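
The diff above is how the restart path decides to reconfigure: minikube renders a fresh kubeadm.yaml.new, runs sudo diff -u against the live kubeadm.yaml, and treats a nonzero diff exit status as config drift (kubeadm.go:640). A minimal Go sketch of that check follows; the function name and error handling are illustrative, not minikube's actual internals.

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted runs diff -u over the live and freshly rendered
// configs; exit status 1 means the files differ, i.e. drift was detected.
func kubeadmConfigDrifted(current, next string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, next).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical, no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: drift; out holds the diff
	}
	return false, "", err // exit 2 or worse: diff itself failed
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift (will reconfigure cluster):")
		fmt.Print(diff)
	}
}

The same pattern explains the sudo cp of kubeadm.yaml.new over kubeadm.yaml further down: once drift is confirmed, the new rendering simply replaces the old file.
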
	I0731 12:28:33.086172    8683 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:28:33.086210    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:33.098415    8683 docker.go:483] Stopping containers: [1a04823f282c 89ccd9d65c44 5f0265d3c82c 5907695a856e 79af8db7b93f 48a551feeb69 765e46f6d6d5 dc5bd8e47595 ee0d0084b71f e8583e731678 77dcff6a0e07 e35e0efca313 c06d364f5fbd 627669f4b423 204324f27a33 6915e8ffd332 ecf03366161d 4f6055948b7a 294c61dc30d9 886c7a3e1e99 53cd6358decf 538cb5ae476c]
	I0731 12:28:33.098485    8683 ssh_runner.go:195] Run: docker stop 1a04823f282c 89ccd9d65c44 5f0265d3c82c 5907695a856e 79af8db7b93f 48a551feeb69 765e46f6d6d5 dc5bd8e47595 ee0d0084b71f e8583e731678 77dcff6a0e07 e35e0efca313 c06d364f5fbd 627669f4b423 204324f27a33 6915e8ffd332 ecf03366161d 4f6055948b7a 294c61dc30d9 886c7a3e1e99 53cd6358decf 538cb5ae476c
	I0731 12:28:33.110433    8683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:28:33.202790    8683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:28:33.206986    8683 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 31 19:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 31 19:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 19:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 31 19:27 /etc/kubernetes/scheduler.conf
	
	I0731 12:28:33.207018    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf
	I0731 12:28:33.210565    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.210586    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:28:33.214859    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf
	I0731 12:28:33.218180    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.218212    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:28:33.221303    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf
	I0731 12:28:33.223918    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.223944    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:28:33.226495    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf
	I0731 12:28:33.229717    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.229741    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
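
The four grep-and-remove rounds above all follow one pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and when grep exits 1 (pattern absent), delete the file so the kubeadm init phase kubeconfig step below can regenerate it. A hedged sketch, assuming only what the Run: lines show:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51322"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the pattern is absent: the
		// "Process exited with status 1" lines in the log above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
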
	I0731 12:28:33.232538    8683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:28:33.235565    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:33.278789    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:33.778031    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:34.009662    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:34.036046    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
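
Taken together, the five Run: lines above replay a fixed sequence of kubeadm init phases against the refreshed config, each wrapped in bash -c so the PATH override pointing at /var/lib/minikube/binaries/v1.24.1 applies. A sketch under that assumption (loop structure and names are hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase list taken verbatim from the commands in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
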
	I0731 12:28:34.058569    8683 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:28:34.058644    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:34.560714    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:35.060410    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:35.560786    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:36.059463    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:36.560991    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.058873    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.560686    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.565231    8683 api_server.go:72] duration metric: took 3.506721708s to wait for apiserver process to appear ...
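
The half-second spacing of the pgrep probes above (12:28:34.05, 34.56, 35.06, ...) suggests a simple poll-until-deadline loop. A minimal sketch, assuming the ~500ms cadence and the pgrep pattern from the log; names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until kube-apiserver shows up or the
// deadline passes; the pattern string is taken verbatim from the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to wait for apiserver process to appear\n", time.Since(start))
}
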
	I0731 12:28:37.565243    8683 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:28:37.565252    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:42.567226    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:42.567238    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:47.567296    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:47.567318    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:52.567455    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:52.567473    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:57.567666    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:57.567748    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:02.568193    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:02.568237    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:07.569350    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:07.569384    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:12.570767    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.570792    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:17.571848    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:17.571902    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:22.573374    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:22.573408    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:27.574481    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:27.574533    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:32.576733    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:32.576778    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:37.578925    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
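
Each healthz probe above gives up after five seconds with "Client.Timeout exceeded", which matches an HTTP client timeout of 5s on the GET against https://10.0.2.15:8443/healthz. A hedged reconstruction follows; skipping TLS verification is a simplification for the sketch, since the real client config above trusts the cluster CA (ca.crt) instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one probe with the 5-second timeout seen in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only simplification; minikube trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	const url = "https://10.0.2.15:8443/healthz"
	for attempt := 1; attempt <= 12; attempt++ {
		if err := checkHealthz(url); err != nil {
			fmt.Println(err) // the log above shows every probe ending here
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
	fmt.Println("giving up; falling back to gathering component logs")
}
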
	I0731 12:29:37.579104    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:37.595118    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:37.595195    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:37.608107    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:37.608183    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:37.619409    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:37.619484    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:37.629921    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:37.629988    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:37.640232    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:37.640309    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:37.650940    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:37.651010    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:37.660887    8683 logs.go:276] 0 containers: []
	W0731 12:29:37.660901    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:37.660960    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:37.671782    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:37.671801    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:37.671808    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:37.692342    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:37.692353    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:37.703556    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:37.703567    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:37.715882    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:37.715892    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:37.727302    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:37.727313    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:37.766791    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:37.766799    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:37.771591    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:37.771599    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:37.872374    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:37.872388    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:37.884847    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:37.884858    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:37.902726    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:37.902734    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:37.914987    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:37.914997    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:37.942344    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:37.942352    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:37.955948    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:37.955958    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:37.982163    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:37.982173    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:37.997076    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:37.997086    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:38.009083    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:38.009094    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:38.028475    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:38.028491    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:38.041482    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:38.041492    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:38.053295    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:38.053307    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
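
Once healthz keeps failing, the runner falls back to the diagnostics pass above: list container IDs per component with a docker name filter, then tail the last 400 log lines from each. An illustrative reconstruction (not minikube's actual code); the component list and flags are copied from the Run: lines:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or not) whose name matches the
// k8s_<component> prefix convention used in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 for kindnet above
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}
}

The remaining rounds in the log repeat this same gather cycle (in varying component order) between healthz attempts, so no further annotation is needed below.
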
	I0731 12:29:40.566338    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:45.566663    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:45.566876    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:45.592654    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:45.592766    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:45.608910    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:45.608999    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:45.625614    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:45.625685    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:45.636129    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:45.636199    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:45.646875    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:45.646977    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:45.657341    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:45.657413    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:45.668256    8683 logs.go:276] 0 containers: []
	W0731 12:29:45.668268    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:45.668328    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:45.678846    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:45.678864    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:45.678869    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:45.696571    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:45.696581    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:45.709313    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:45.709324    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:45.750659    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:45.750668    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:45.755641    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:45.755649    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:45.773727    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:45.773740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:45.784882    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:45.784893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:45.797063    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:45.797075    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:45.809465    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:45.809478    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:45.835913    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:45.835920    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:45.874641    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:45.874655    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:45.900609    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:45.900621    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:45.914855    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:45.914868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:45.926482    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:45.926505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:45.938311    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:45.938322    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:45.949544    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:45.949554    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:45.962031    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:45.962047    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:45.975607    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:45.975619    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:45.987617    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:45.987627    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:48.514865    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:53.517481    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:53.517951    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:53.567277    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:53.567408    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:53.593729    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:53.593821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:53.606030    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:53.606107    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:53.622493    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:53.622577    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:53.633631    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:53.633701    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:53.644375    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:53.644450    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:53.655010    8683 logs.go:276] 0 containers: []
	W0731 12:29:53.655021    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:53.655078    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:53.666390    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:53.666406    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:53.666412    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:53.671244    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:53.671252    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:53.683979    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:53.683992    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:53.723566    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:53.723575    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:53.735533    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:53.735543    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:53.762285    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:53.762295    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:53.787858    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:53.787870    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:53.802588    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:53.802604    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:53.814410    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:53.814421    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:53.833453    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:53.833465    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:53.850361    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:53.850373    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:53.862178    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:53.862188    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:53.874381    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:53.874392    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:53.888215    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:53.888227    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:53.901963    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:53.901973    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:53.913538    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:53.913550    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:53.934001    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:53.934015    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:53.946302    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:53.946316    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:53.957708    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:53.957719    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:56.498440    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:01.500891    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:01.501054    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:01.518612    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:01.518704    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:01.533659    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:01.533728    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:01.544362    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:01.544431    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:01.555043    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:01.555112    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:01.573142    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:01.573209    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:01.583117    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:01.583190    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:01.594587    8683 logs.go:276] 0 containers: []
	W0731 12:30:01.594599    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:01.594655    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:01.605327    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:01.605343    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:01.605349    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:01.616810    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:01.616823    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:01.660486    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:01.660498    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:01.666248    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:01.666259    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:01.682167    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:01.682185    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:01.697737    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:01.697747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:01.715001    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:01.715014    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:01.726940    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:01.726950    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:01.744313    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:01.744323    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:01.756496    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:01.756507    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:01.768352    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:01.768363    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:01.779532    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:01.779542    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:01.813743    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:01.813753    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:01.839188    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:01.839199    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:01.853853    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:01.853863    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:01.872290    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:01.872302    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:01.897863    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:01.897873    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:01.912655    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:01.912665    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:01.924363    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:01.924373    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:04.438310    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:09.440551    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:09.440704    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:09.462902    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:09.462980    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:09.474889    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:09.474964    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:09.485271    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:09.485341    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:09.495790    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:09.495856    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:09.506251    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:09.506316    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:09.517066    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:09.517135    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:09.527888    8683 logs.go:276] 0 containers: []
	W0731 12:30:09.527899    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:09.527954    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:09.538761    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:09.538778    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:09.538785    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:09.578262    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:09.578272    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:09.592203    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:09.592215    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:09.603763    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:09.603774    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:09.615935    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:09.615945    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:09.627972    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:09.627982    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:09.639538    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:09.639555    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:09.665091    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:09.665097    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:09.669328    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:09.669337    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:09.683526    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:09.683536    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:09.708235    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:09.708246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:09.730113    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:09.730123    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:09.747847    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:09.747861    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:09.759693    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:09.759705    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:09.782355    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:09.782369    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:09.793644    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:09.793659    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:09.828760    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:09.828776    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:09.840245    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:09.840256    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:09.851845    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:09.851858    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:12.370042    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:17.372421    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:17.372729    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:17.405521    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:17.405651    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:17.426033    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:17.426115    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:17.442559    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:17.442635    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:17.453921    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:17.453989    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:17.464686    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:17.464755    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:17.475652    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:17.475721    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:17.485973    8683 logs.go:276] 0 containers: []
	W0731 12:30:17.485983    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:17.486045    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:17.501529    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:17.501544    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:17.501550    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:17.515933    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:17.515947    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:17.528545    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:17.528558    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:17.541515    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:17.541528    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:17.566204    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:17.566214    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:17.601270    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:17.601283    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:17.612523    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:17.612533    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:17.623543    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:17.623554    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:17.641261    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:17.641272    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:17.666075    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:17.666086    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:17.685866    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:17.685876    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:17.697979    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:17.697990    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:17.709755    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:17.709766    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:17.751189    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:17.751207    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:17.758458    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:17.758469    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:17.772758    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:17.772769    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:17.784498    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:17.784509    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:17.801483    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:17.801493    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:17.813246    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:17.813259    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:20.328338    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:25.330703    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:25.331012    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:25.362240    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:25.362373    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:25.380040    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:25.380141    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:25.399544    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:25.399627    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:25.412441    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:25.412518    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:25.423303    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:25.423369    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:25.434346    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:25.434418    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:25.445202    8683 logs.go:276] 0 containers: []
	W0731 12:30:25.445215    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:25.445284    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:25.455849    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:25.455865    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:25.455870    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:25.469894    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:25.469908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:25.509665    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:25.509674    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:25.523213    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:25.523228    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:25.527651    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:25.527657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:25.552276    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:25.552289    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:25.565547    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:25.565557    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:25.582566    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:25.582576    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:25.594205    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:25.594216    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:25.605704    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:25.605715    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:25.617985    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:25.617994    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:25.629361    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:25.629373    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:25.671519    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:25.671529    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:25.685989    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:25.685999    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:25.697442    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:25.697455    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:25.709554    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:25.709564    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:25.721654    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:25.721664    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:25.739756    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:25.739768    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:25.751548    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:25.751556    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:28.279930    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:33.282310    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:33.282454    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:33.301282    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:33.301353    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:33.312385    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:33.312458    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:33.322580    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:33.322653    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:33.336877    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:33.336943    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:33.350479    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:33.350538    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:33.363300    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:33.363367    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:33.373860    8683 logs.go:276] 0 containers: []
	W0731 12:30:33.373872    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:33.373932    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:33.384875    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:33.384888    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:33.384893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:33.409677    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:33.409689    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:33.422362    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:33.422376    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:33.438808    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:33.438818    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:33.450083    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:33.450092    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:33.461001    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:33.461012    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:33.486514    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:33.486522    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:33.527990    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:33.528007    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:33.542434    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:33.542446    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:33.556839    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:33.556852    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:33.576593    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:33.576604    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:33.612273    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:33.612286    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:33.629113    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:33.629124    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:33.640757    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:33.640767    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:33.645017    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:33.645024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:33.658741    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:33.658752    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:33.675255    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:33.675267    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:33.687437    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:33.687450    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:33.706006    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:33.706017    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:36.220854    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:41.223063    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:41.223234    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:41.241186    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:41.241273    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:41.254319    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:41.254396    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:41.265507    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:41.265576    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:41.276668    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:41.276736    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:41.287723    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:41.287789    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:41.298740    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:41.298805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:41.309019    8683 logs.go:276] 0 containers: []
	W0731 12:30:41.309031    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:41.309096    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:41.318900    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:41.318917    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:41.318923    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:41.330528    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:41.330538    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:41.345228    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:41.345242    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:41.369533    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:41.369543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:41.387653    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:41.387667    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:41.399614    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:41.399625    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:41.411092    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:41.411106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:41.422897    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:41.422908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:41.459915    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:41.459926    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:41.474226    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:41.474236    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:41.486469    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:41.486479    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:41.500460    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:41.500472    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:41.515478    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:41.515490    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:41.540876    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:41.540893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:41.575919    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:41.575934    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:41.588100    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:41.588114    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:41.629561    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:41.629569    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:41.633871    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:41.633877    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:41.651466    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:41.651481    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:44.165932    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:49.168330    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:49.168735    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:49.202691    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:49.202828    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:49.221604    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:49.221708    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:49.237247    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:49.237327    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:49.249752    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:49.249829    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:49.260900    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:49.260961    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:49.271477    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:49.271541    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:49.282143    8683 logs.go:276] 0 containers: []
	W0731 12:30:49.282156    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:49.282219    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:49.292606    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:49.292621    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:49.292626    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:49.328229    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:49.328240    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:49.343858    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:49.343871    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:49.355215    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:49.355227    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:49.373058    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:49.373072    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:49.385861    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:49.385874    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:49.398478    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:49.398490    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:49.416001    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:49.416013    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:49.427543    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:49.427557    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:49.432140    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:49.432146    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:49.457260    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:49.457270    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:49.471204    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:49.471214    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:49.490010    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:49.490020    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:49.501795    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:49.501806    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:49.512980    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:49.512992    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:49.525939    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:49.525950    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:49.537125    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:49.537139    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:49.562565    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:49.562575    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:49.604055    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:49.604065    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:52.121680    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:57.123985    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:57.124292    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:57.160649    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:57.160782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:57.181486    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:57.181572    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:57.196331    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:57.196417    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:57.208273    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:57.208348    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:57.219383    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:57.219455    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:57.231620    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:57.231693    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:57.245188    8683 logs.go:276] 0 containers: []
	W0731 12:30:57.245199    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:57.245261    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:57.256019    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:57.256034    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:57.256040    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:57.268197    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:57.268207    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:57.279752    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:57.279764    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:57.292896    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:57.292907    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:57.305087    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:57.305097    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:57.343489    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:57.343501    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:57.379889    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:57.379909    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:57.409448    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:57.409460    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:57.448798    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:57.448808    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:57.466664    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:57.466675    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:57.481310    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:57.481322    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:57.495996    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:57.496018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:57.507689    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:57.507701    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:57.519084    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:57.519095    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:57.531401    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:57.531413    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:57.536082    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:57.536088    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:57.561644    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:57.561657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:57.581300    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:57.581313    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:57.592740    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:57.592751    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:00.106303    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:05.108931    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:05.109427    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:05.147908    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:05.148048    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:05.168242    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:05.168349    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:05.183900    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:05.183988    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:05.196509    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:05.196587    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:05.208384    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:05.208458    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:05.219377    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:05.219446    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:05.230020    8683 logs.go:276] 0 containers: []
	W0731 12:31:05.230031    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:05.230090    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:05.241313    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:05.241329    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:05.241333    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:05.253062    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:05.253076    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:05.277516    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:05.277527    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:05.318990    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:05.319000    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:05.355536    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:05.355547    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:05.372747    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:05.372760    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:05.384579    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:05.384592    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:05.395932    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:05.395944    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:05.407820    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:05.407830    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:05.426809    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:05.426818    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:05.439879    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:05.439891    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:05.451207    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:05.451217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:05.463324    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:05.463336    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:05.474997    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:05.475007    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:05.479571    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:05.479580    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:05.493854    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:05.493864    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:05.507741    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:05.507751    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:05.522047    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:05.522059    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:05.547798    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:05.547810    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:08.063797    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:13.065001    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:13.065409    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:13.112019    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:13.112154    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:13.132886    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:13.133006    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:13.152599    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:13.152678    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:13.164418    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:13.164497    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:13.177309    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:13.177393    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:13.188297    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:13.188373    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:13.199202    8683 logs.go:276] 0 containers: []
	W0731 12:31:13.199214    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:13.199270    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:13.210260    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:13.210274    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:13.210279    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:13.224884    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:13.224894    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:13.240235    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:13.240246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:13.251815    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:13.251827    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:13.270860    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:13.270872    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:13.296926    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:13.296937    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:13.308973    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:13.308984    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:13.321109    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:13.321119    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:13.333227    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:13.333239    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:13.345156    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:13.345166    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:13.356627    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:13.356637    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:13.376663    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:13.376673    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:13.419057    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:13.419064    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:13.423640    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:13.423649    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:13.460083    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:13.460093    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:13.484047    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:13.484057    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:13.502349    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:13.502359    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:13.520737    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:13.520750    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:13.546687    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:13.546702    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:16.063920    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:21.064724    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:21.064889    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:21.077872    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:21.077949    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:21.089735    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:21.089805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:21.100028    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:21.100100    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:21.110764    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:21.110836    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:21.121455    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:21.121527    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:21.132579    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:21.132649    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:21.142791    8683 logs.go:276] 0 containers: []
	W0731 12:31:21.142806    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:21.142864    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:21.153716    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:21.153732    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:21.153737    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:21.158738    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:21.158746    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:21.194266    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:21.194278    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:21.219485    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:21.219496    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:21.231893    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:21.231904    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:21.244793    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:21.244807    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:21.262497    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:21.262507    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:21.280149    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:21.280160    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:21.321108    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:21.321115    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:21.347117    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:21.347128    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:21.361927    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:21.361938    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:21.373283    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:21.373295    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:21.384489    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:21.384498    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:21.395344    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:21.395354    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:21.414032    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:21.414041    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:21.432271    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:21.432281    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:21.450015    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:21.450026    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:21.461630    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:21.461641    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:21.473312    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:21.473323    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:24.000639    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:29.003085    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:29.003540    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:29.043520    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:29.043665    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:29.067009    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:29.067122    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:29.084097    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:29.084175    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:29.096233    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:29.096309    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:29.108830    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:29.108905    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:29.119765    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:29.119832    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:29.130602    8683 logs.go:276] 0 containers: []
	W0731 12:31:29.130612    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:29.130670    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:29.141952    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:29.141966    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:29.141971    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:29.183892    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:29.183899    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:29.187857    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:29.187864    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:29.199328    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:29.199341    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:29.212045    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:29.212057    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:29.230964    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:29.230974    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:29.242859    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:29.242871    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:29.257086    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:29.257100    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:29.268885    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:29.268896    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:29.280356    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:29.280367    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:29.297357    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:29.297368    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:29.309402    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:29.309414    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:29.322587    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:29.322599    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:29.346126    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:29.346134    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:29.382007    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:29.382018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:29.396715    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:29.396725    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:29.420918    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:29.420929    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:29.434994    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:29.435005    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:29.452549    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:29.452560    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:31.966117    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:36.968412    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:36.968652    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:36.993202    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:36.993344    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:37.012124    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:37.012199    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:37.024392    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:37.024467    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:37.035581    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:37.035655    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:37.046182    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:37.046252    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:37.056806    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:37.056871    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:37.072536    8683 logs.go:276] 0 containers: []
	W0731 12:31:37.072547    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:37.072602    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:37.082792    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:37.082809    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:37.082814    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:37.096500    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:37.096510    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:37.113881    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:37.113892    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:37.125789    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:37.125800    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:37.140256    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:37.140265    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:37.151073    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:37.151082    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:37.167781    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:37.167793    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:37.181193    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:37.181205    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:37.218283    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:37.218293    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:37.230273    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:37.230283    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:37.254964    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:37.254971    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:37.278952    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:37.278962    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:37.290929    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:37.290941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:37.302717    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:37.302728    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:37.316006    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:37.316020    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:37.355139    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:37.355146    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:37.359301    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:37.359306    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:37.373093    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:37.373102    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:37.387509    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:37.387522    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:39.908750    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:44.911328    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:44.911562    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:44.943489    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:44.943616    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:44.958807    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:44.958906    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:44.971180    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:44.971258    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:44.981997    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:44.982068    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:44.992521    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:44.992606    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:45.003238    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:45.003321    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:45.013746    8683 logs.go:276] 0 containers: []
	W0731 12:31:45.013759    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:45.013821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:45.025115    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:45.025131    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:45.025136    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:45.037924    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:45.037940    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:45.052117    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:45.052132    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:45.066803    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:45.066817    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:45.078529    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:45.078543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:45.096814    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:45.096825    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:45.108471    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:45.108485    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:45.113444    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:45.113451    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:45.124815    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:45.124823    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:45.142133    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:45.142147    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:45.153766    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:45.153776    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:45.194338    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:45.194346    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:45.218660    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:45.218671    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:45.242338    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:45.242348    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:45.253787    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:45.253801    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:45.290267    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:45.290277    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:45.304680    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:45.304690    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:45.316930    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:45.316941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:45.335351    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:45.335361    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:47.849230    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:52.851652    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:52.852021    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:52.883359    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:52.883489    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:52.900739    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:52.900835    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:52.914777    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:52.914859    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:52.931224    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:52.931298    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:52.941249    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:52.941320    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:52.952297    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:52.952369    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:52.962869    8683 logs.go:276] 0 containers: []
	W0731 12:31:52.962879    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:52.962937    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:52.974100    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:52.974115    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:52.974120    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:52.993013    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:52.993023    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:53.004676    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:53.004689    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:53.016725    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:53.016734    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:53.032980    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:53.032991    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:53.052074    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:53.052082    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:53.063638    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:53.063655    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:53.075630    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:53.075640    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:53.086769    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:53.086779    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:53.111308    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:53.111316    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:53.123990    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:53.124002    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:53.148886    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:53.148896    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:53.160135    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:53.160148    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:53.177662    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:53.177675    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:53.191696    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:53.191707    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:53.205433    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:53.205445    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:53.216992    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:53.217004    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:53.256982    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:53.256992    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:53.261830    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:53.261837    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:55.797990    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:00.799323    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:00.799504    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:00.812379    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:00.812461    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:00.826344    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:00.826413    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:00.837077    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:00.837151    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:00.848019    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:00.848094    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:00.859167    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:00.859236    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:00.869911    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:00.870005    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:00.880244    8683 logs.go:276] 0 containers: []
	W0731 12:32:00.880255    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:00.880316    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:00.890778    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:00.890800    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:00.890804    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:00.904490    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:00.904501    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:00.916111    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:00.916122    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:00.927301    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:00.927311    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:00.945271    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:00.945281    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:00.956791    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:00.956802    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:00.979894    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:00.979900    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:01.019553    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:01.019563    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:01.080016    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:01.080026    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:01.105078    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:01.105088    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:01.122588    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:01.122603    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:01.134698    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:01.134709    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:01.155362    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:01.155372    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:01.169914    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:01.169926    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:01.183023    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:01.183034    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:01.187345    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:01.187356    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:01.199786    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:01.199796    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:01.211767    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:01.211781    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:01.223694    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:01.223707    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:03.738131    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:08.738738    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:08.738928    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:08.754029    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:08.754117    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:08.766027    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:08.766101    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:08.780098    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:08.780164    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:08.790512    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:08.790583    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:08.801100    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:08.801170    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:08.811623    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:08.811701    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:08.821893    8683 logs.go:276] 0 containers: []
	W0731 12:32:08.821905    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:08.821962    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:08.836506    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:08.836521    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:08.836525    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:08.854274    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:08.854288    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:08.858820    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:08.858829    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:08.882545    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:08.882556    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:08.897543    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:08.897553    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:08.915799    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:08.915809    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:08.927802    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:08.927812    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:08.941357    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:08.941365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:08.956597    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:08.956607    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:08.978247    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:08.978257    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:09.017219    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:09.017230    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:09.030916    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:09.030926    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:09.054576    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:09.054585    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:09.066280    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:09.066290    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:09.081242    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:09.081254    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:09.092662    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:09.092675    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:09.104928    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:09.104941    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:09.139876    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:09.139886    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:09.151250    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:09.151261    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:11.664995    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:16.667373    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:16.667824    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:16.699731    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:16.699866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:16.718403    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:16.718502    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:16.732646    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:16.732729    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:16.744123    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:16.744193    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:16.754778    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:16.754851    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:16.765369    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:16.765446    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:16.775749    8683 logs.go:276] 0 containers: []
	W0731 12:32:16.775760    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:16.775817    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:16.786779    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:16.786796    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:16.786801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:16.798655    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:16.798669    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:16.816011    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:16.816027    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:16.828503    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:16.828516    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:16.840833    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:16.840845    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:16.853778    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:16.853789    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:16.891943    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:16.891952    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:16.906448    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:16.906461    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:16.918704    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:16.918715    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:16.943890    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:16.943903    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:16.955938    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:16.955949    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:16.973515    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:16.973527    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:17.013131    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:17.013138    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:17.017938    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:17.017948    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:17.042787    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:17.042799    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:17.065975    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:17.065982    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:17.078957    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:17.078972    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:17.092737    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:17.092747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:17.104369    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:17.104381    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:19.616461    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:24.618818    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:24.618986    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:24.636393    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:24.636489    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:24.651875    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:24.651949    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:24.663291    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:24.663363    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:24.674616    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:24.674683    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:24.685419    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:24.685478    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:24.695957    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:24.696025    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:24.711399    8683 logs.go:276] 0 containers: []
	W0731 12:32:24.711411    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:24.711472    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:24.721770    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:24.721785    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:24.721790    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:24.733729    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:24.733740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:24.752495    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:24.752505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:24.770354    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:24.770365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:24.781632    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:24.781642    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:24.793792    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:24.793801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:24.804960    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:24.804971    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:24.846051    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:24.846074    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:24.860227    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:24.860245    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:24.872233    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:24.872246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:24.883996    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:24.884011    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:24.896280    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:24.896291    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:24.918126    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:24.918133    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:24.922481    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:24.922490    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:24.956704    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:24.956717    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:24.972585    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:24.972598    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:24.997424    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:24.997436    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:25.011726    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:25.011740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:25.023176    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:25.023191    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:27.537713    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:32.539832    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:32.539947    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:32.551681    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:32.551782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:32.569454    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:32.569534    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:32.585072    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:32.585150    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:32.596716    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:32.596782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:32.608736    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:32.608821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:32.620639    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:32.620710    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:32.632423    8683 logs.go:276] 0 containers: []
	W0731 12:32:32.632433    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:32.632499    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:32.644297    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:32.644311    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:32.644316    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:32.657550    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:32.657561    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:32.670988    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:32.670999    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:32.695518    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:32.695546    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:32.708554    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:32.708568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:32.713386    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:32.713396    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:32.741413    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:32.741432    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:32.761540    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:32.761557    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:32.781031    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:32.781047    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:32.823817    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:32.823839    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:32.841720    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:32.841732    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:32.855567    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:32.855579    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:32.867982    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:32.867995    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:32.882377    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:32.882388    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:32.894487    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:32.894498    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:32.907588    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:32.907603    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:32.947184    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:32.947196    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:32.962796    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:32.962809    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:32.974972    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:32.974986    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:35.489306    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:40.491624    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:40.491684    8683 kubeadm.go:597] duration metric: took 4m7.416969666s to restartPrimaryControlPlane
	W0731 12:32:40.491730    8683 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:32:40.491748    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:32:41.561905    8683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.070150083s)
	I0731 12:32:41.561978    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:32:41.567212    8683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:32:41.570095    8683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:32:41.572966    8683 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:32:41.572972    8683 kubeadm.go:157] found existing configuration files:
	
	I0731 12:32:41.572993    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf
	I0731 12:32:41.575693    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:32:41.575720    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:32:41.578136    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf
	I0731 12:32:41.580803    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:32:41.580820    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:32:41.583762    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf
	I0731 12:32:41.586234    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:32:41.586256    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:32:41.589301    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf
	I0731 12:32:41.592411    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:32:41.592433    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:32:41.595154    8683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:32:41.612559    8683 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:32:41.612585    8683 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:32:41.659756    8683 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:32:41.659809    8683 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:32:41.659861    8683 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:32:41.710931    8683 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:32:41.715117    8683 out.go:204]   - Generating certificates and keys ...
	I0731 12:32:41.715155    8683 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:32:41.715203    8683 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:32:41.715287    8683 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:32:41.715373    8683 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:32:41.715467    8683 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:32:41.715556    8683 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:32:41.715642    8683 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:32:41.715678    8683 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:32:41.715738    8683 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:32:41.715802    8683 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:32:41.715823    8683 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:32:41.715861    8683 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:32:41.856521    8683 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:32:41.904704    8683 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:32:42.292508    8683 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:32:42.519789    8683 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:32:42.551053    8683 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:32:42.551404    8683 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:32:42.551479    8683 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:32:42.642655    8683 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:32:42.647192    8683 out.go:204]   - Booting up control plane ...
	I0731 12:32:42.647239    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:32:42.647275    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:32:42.647315    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:32:42.647359    8683 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:32:42.647596    8683 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:32:47.145013    8683 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502491 seconds
	I0731 12:32:47.145074    8683 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:32:47.148628    8683 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:32:47.657415    8683 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:32:47.657586    8683 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-568000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:32:48.161561    8683 kubeadm.go:310] [bootstrap-token] Using token: q9milu.92yi4hukjtyyvv5w
	I0731 12:32:48.167530    8683 out.go:204]   - Configuring RBAC rules ...
	I0731 12:32:48.167599    8683 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:32:48.167643    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:32:48.171991    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:32:48.172867    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:32:48.173765    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:32:48.174499    8683 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:32:48.177809    8683 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:32:48.329841    8683 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:32:48.567846    8683 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:32:48.568401    8683 kubeadm.go:310] 
	I0731 12:32:48.568432    8683 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:32:48.568436    8683 kubeadm.go:310] 
	I0731 12:32:48.568473    8683 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:32:48.568479    8683 kubeadm.go:310] 
	I0731 12:32:48.568491    8683 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:32:48.568522    8683 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:32:48.568550    8683 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:32:48.568553    8683 kubeadm.go:310] 
	I0731 12:32:48.568579    8683 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:32:48.568582    8683 kubeadm.go:310] 
	I0731 12:32:48.568606    8683 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:32:48.568611    8683 kubeadm.go:310] 
	I0731 12:32:48.568643    8683 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:32:48.568683    8683 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:32:48.568730    8683 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:32:48.568735    8683 kubeadm.go:310] 
	I0731 12:32:48.568780    8683 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:32:48.568815    8683 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:32:48.568818    8683 kubeadm.go:310] 
	I0731 12:32:48.568863    8683 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9milu.92yi4hukjtyyvv5w \
	I0731 12:32:48.568918    8683 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f \
	I0731 12:32:48.568930    8683 kubeadm.go:310] 	--control-plane 
	I0731 12:32:48.568932    8683 kubeadm.go:310] 
	I0731 12:32:48.568973    8683 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:32:48.568978    8683 kubeadm.go:310] 
	I0731 12:32:48.569018    8683 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9milu.92yi4hukjtyyvv5w \
	I0731 12:32:48.569072    8683 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f 
	I0731 12:32:48.569128    8683 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:32:48.569173    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:32:48.569181    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:32:48.573643    8683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:32:48.580668    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:32:48.583959    8683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 12:32:48.590496    8683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:32:48.590586    8683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-568000 minikube.k8s.io/updated_at=2024_07_31T12_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=running-upgrade-568000 minikube.k8s.io/primary=true
	I0731 12:32:48.590658    8683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:32:48.632927    8683 ops.go:34] apiserver oom_adj: -16
	I0731 12:32:48.632947    8683 kubeadm.go:1113] duration metric: took 42.383625ms to wait for elevateKubeSystemPrivileges
	I0731 12:32:48.633042    8683 kubeadm.go:394] duration metric: took 4m15.572514958s to StartCluster
	I0731 12:32:48.633054    8683 settings.go:142] acquiring lock: {Name:mk262cff1bf9355aa6c0530bb5de14a2847090f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:48.633131    8683 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:32:48.633515    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:48.633698    8683 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:32:48.633829    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:32:48.633819    8683 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:32:48.633885    8683 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-568000"
	I0731 12:32:48.633895    8683 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-568000"
	W0731 12:32:48.633899    8683 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:32:48.633901    8683 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-568000"
	I0731 12:32:48.633911    8683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-568000"
	I0731 12:32:48.633916    8683 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0731 12:32:48.634883    8683 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b981b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:32:48.635007    8683 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-568000"
	W0731 12:32:48.635012    8683 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:32:48.635019    8683 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0731 12:32:48.637618    8683 out.go:177] * Verifying Kubernetes components...
	I0731 12:32:48.637991    8683 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:48.640856    8683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:32:48.640865    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:32:48.644504    8683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:32:48.648615    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:32:48.652496    8683 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:48.652503    8683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:32:48.652509    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:32:48.728337    8683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:32:48.733427    8683 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:32:48.733465    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:32:48.737501    8683 api_server.go:72] duration metric: took 103.793042ms to wait for apiserver process to appear ...
	I0731 12:32:48.737508    8683 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:32:48.737514    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:48.760499    8683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:48.784150    8683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:53.738741    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:53.738778    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:58.739473    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:58.739542    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:03.739785    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:03.739814    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:08.740095    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:08.740155    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:13.740694    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:13.740715    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:18.741238    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:18.741271    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:33:19.106146    8683 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:33:19.109743    8683 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:33:19.117603    8683 addons.go:510] duration metric: took 30.484320417s for enable addons: enabled=[storage-provisioner]
	I0731 12:33:23.741954    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:23.741987    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:28.742988    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:28.743021    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:33.744256    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:33.744300    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:38.745926    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:38.745984    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:43.748007    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:43.748051    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:48.750312    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:48.750479    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:48.761769    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:33:48.761841    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:48.772253    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:33:48.772320    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:48.782944    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:33:48.783009    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:48.793420    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:33:48.793488    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:48.804214    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:33:48.804287    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:48.817149    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:33:48.817225    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:48.827150    8683 logs.go:276] 0 containers: []
	W0731 12:33:48.827162    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:48.827230    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:48.838064    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:33:48.838081    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:48.838086    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:48.862958    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:48.862972    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:48.867691    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:33:48.867697    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:33:48.881234    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:33:48.881248    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:33:48.897906    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:33:48.897920    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:33:48.909671    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:33:48.909682    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:33:48.921742    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:33:48.921753    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:33:48.939874    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:33:48.939885    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:33:48.951752    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:33:48.951764    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:48.963825    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:48.963836    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:49.001127    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:49.001136    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:49.038534    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:33:49.038545    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:33:49.054095    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:33:49.054106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:33:51.573677    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:56.575856    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:56.575982    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:56.587167    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:33:56.587252    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:56.598402    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:33:56.598475    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:56.609257    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:33:56.609327    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:56.620047    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:33:56.620120    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:56.630269    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:33:56.630340    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:56.640739    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:33:56.640805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:56.651164    8683 logs.go:276] 0 containers: []
	W0731 12:33:56.651174    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:56.651234    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:56.661706    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:33:56.661721    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:33:56.661727    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:33:56.675594    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:33:56.675603    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:33:56.693717    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:33:56.693728    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:33:56.711200    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:33:56.711212    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:33:56.722883    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:33:56.722897    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:56.734421    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:56.734435    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:56.757811    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:56.757819    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:56.794989    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:56.794997    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:56.799542    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:56.799552    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:56.835698    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:33:56.835710    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:33:56.847894    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:33:56.847908    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:33:56.860008    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:33:56.860019    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:33:56.875205    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:33:56.875217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:33:59.389669    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:04.391624    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:04.391866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:04.419441    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:04.419540    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:04.443833    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:04.443911    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:04.456426    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:04.456493    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:04.466943    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:04.467015    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:04.477488    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:04.477563    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:04.488270    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:04.488336    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:04.499066    8683 logs.go:276] 0 containers: []
	W0731 12:34:04.499076    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:04.499128    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:04.509639    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:04.509655    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:04.509660    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:04.528292    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:04.528303    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:04.542050    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:04.542061    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:04.553766    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:04.553777    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:04.574508    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:04.574517    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:04.586010    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:04.586020    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:04.609560    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:04.609568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:04.647398    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:04.647410    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:04.651970    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:04.651976    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:04.685672    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:04.685688    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:04.697755    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:04.697765    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:04.719771    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:04.719782    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:04.734039    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:04.734051    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:07.247281    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:12.249808    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:12.249957    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:12.263462    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:12.263546    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:12.275078    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:12.275149    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:12.285208    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:12.285276    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:12.295674    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:12.295740    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:12.306176    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:12.306246    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:12.316963    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:12.317030    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:12.327507    8683 logs.go:276] 0 containers: []
	W0731 12:34:12.327519    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:12.327578    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:12.337817    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:12.337834    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:12.337840    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:12.349707    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:12.349718    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:12.361423    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:12.361434    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:12.375680    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:12.375690    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:12.386853    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:12.386865    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:12.425700    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:12.425714    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:12.430379    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:12.430385    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:12.471557    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:12.471566    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:12.485894    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:12.485903    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:12.499906    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:12.499919    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:12.515011    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:12.515025    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:12.534906    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:12.534915    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:12.557765    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:12.557772    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:15.070578    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:20.072746    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:20.072965    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:20.095791    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:20.095885    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:20.109106    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:20.109167    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:20.127133    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:20.127191    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:20.139255    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:20.139319    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:20.149790    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:20.149847    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:20.160037    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:20.160102    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:20.170405    8683 logs.go:276] 0 containers: []
	W0731 12:34:20.170416    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:20.170466    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:20.180685    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:20.180700    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:20.180707    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:20.193484    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:20.193494    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:20.229896    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:20.229906    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:20.234558    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:20.234568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:20.268235    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:20.268246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:20.280423    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:20.280435    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:20.292016    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:20.292030    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:20.303868    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:20.303879    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:20.329028    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:20.329036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:20.343679    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:20.343692    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:20.357357    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:20.357365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:20.372624    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:20.372634    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:20.384012    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:20.384024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:22.903220    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:29.379455    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:29.379703    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:29.404946    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:29.405074    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:29.421499    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:29.421579    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:29.435118    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:29.435202    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:29.446620    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:29.446688    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:29.456663    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:29.456728    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:29.467180    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:29.467254    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:29.477420    8683 logs.go:276] 0 containers: []
	W0731 12:34:29.477435    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:29.477496    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:29.489022    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:29.489038    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:29.489044    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:29.501093    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:29.501103    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:29.539993    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:29.540008    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:29.554477    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:29.554487    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:29.568141    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:29.568150    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:29.579577    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:29.579589    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:29.594567    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:29.594578    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:29.612403    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:29.612414    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:29.624339    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:29.624349    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:29.629543    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:29.629550    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:29.663567    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:29.663577    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:29.674944    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:29.674955    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:29.686424    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:29.686435    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:32.211434    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:37.212238    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:37.212339    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:37.226035    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:37.226109    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:37.237120    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:37.237190    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:37.247633    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:37.247703    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:37.266729    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:37.266802    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:37.278198    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:37.278275    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:37.288923    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:37.288993    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:37.299093    8683 logs.go:276] 0 containers: []
	W0731 12:34:37.299103    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:37.299162    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:37.309537    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:37.309554    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:37.309559    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:37.314108    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:37.314114    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:37.328576    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:37.328585    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:37.352029    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:37.352036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:37.366849    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:37.366860    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:37.380858    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:37.380868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:37.398414    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:37.398425    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:37.436326    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:37.436334    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:37.473365    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:37.473380    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:37.487676    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:37.487686    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:37.500509    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:37.500521    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:37.513381    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:37.513392    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:37.525129    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:37.525139    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:40.040758    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:45.042987    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:45.043160    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:45.061010    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:45.061097    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:45.074607    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:45.074677    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:45.086138    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:45.086206    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:45.097262    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:45.097331    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:45.108058    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:45.108119    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:45.119145    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:45.119205    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:45.129491    8683 logs.go:276] 0 containers: []
	W0731 12:34:45.129502    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:45.129556    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:45.140584    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:45.140601    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:45.140607    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:45.155966    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:45.155976    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:45.173644    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:45.173654    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:45.199040    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:45.199050    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:45.234986    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:45.234995    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:45.247776    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:45.247787    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:45.262314    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:45.262324    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:45.276049    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:45.276059    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:45.289817    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:45.289830    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:45.302226    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:45.302235    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:45.317792    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:45.317804    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:45.329707    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:45.329716    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:45.334098    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:45.334105    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:47.871335    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:52.873547    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:52.873723    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:52.892636    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:52.892739    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:52.906906    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:52.906978    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:52.918804    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:52.918865    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:52.929433    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:52.929506    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:52.940353    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:52.940427    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:52.951360    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:52.951425    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:52.961747    8683 logs.go:276] 0 containers: []
	W0731 12:34:52.961758    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:52.961823    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:52.972505    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:52.972521    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:52.972526    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:52.986395    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:52.986405    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:53.001871    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:53.001882    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:53.013996    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:53.014007    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:53.032268    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:53.032279    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:53.044299    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:53.044312    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:53.049341    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:53.049348    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:53.086117    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:53.086129    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:53.097785    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:53.097797    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:53.109651    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:53.109664    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:53.121411    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:53.121421    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:53.146154    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:53.146164    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:53.183603    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:53.183616    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:55.700359    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:00.702485    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:00.702648    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:00.734469    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:00.734536    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:00.746378    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:00.746456    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:00.757191    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:00.757257    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:00.768671    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:00.768733    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:00.779651    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:00.779716    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:00.790495    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:00.790569    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:00.801106    8683 logs.go:276] 0 containers: []
	W0731 12:35:00.801122    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:00.801181    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:00.817821    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:00.817840    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:00.817846    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:00.829605    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:00.829616    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:00.845724    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:00.845735    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:00.857337    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:00.857350    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:00.873978    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:00.873987    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:00.885362    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:00.885372    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:00.908890    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:00.908899    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:00.923475    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:00.923484    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:00.941357    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:00.941368    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:00.954861    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:00.954874    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:00.969791    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:00.969801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:00.981867    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:00.981877    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:01.021471    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:01.021483    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:01.035741    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:01.035750    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:01.071577    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:01.071585    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:03.578508    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:08.580785    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:08.580960    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:08.600934    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:08.601017    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:08.625831    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:08.625910    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:08.636501    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:08.636574    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:08.650214    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:08.650283    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:08.660649    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:08.660726    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:08.672532    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:08.672603    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:08.682659    8683 logs.go:276] 0 containers: []
	W0731 12:35:08.682670    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:08.682733    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:08.693421    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:08.693440    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:08.693444    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:08.705507    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:08.705517    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:08.720549    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:08.720559    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:08.738009    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:08.738021    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:08.742702    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:08.742708    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:08.778831    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:08.778843    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:08.793066    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:08.793077    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:08.805922    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:08.805934    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:08.842002    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:08.842010    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:08.853137    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:08.853148    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:08.864373    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:08.864384    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:08.878803    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:08.878815    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:08.890738    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:08.890751    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:08.903009    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:08.903018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:08.914577    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:08.914591    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:11.441460    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:16.443799    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:16.443939    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:16.469802    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:16.469875    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:16.480943    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:16.481022    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:16.493029    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:16.493104    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:16.504778    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:16.504858    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:16.515774    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:16.515865    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:16.526534    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:16.526599    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:16.537766    8683 logs.go:276] 0 containers: []
	W0731 12:35:16.537775    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:16.537835    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:16.548986    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:16.549002    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:16.549009    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:16.573207    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:16.573216    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:16.610528    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:16.610535    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:16.624516    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:16.624525    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:16.636134    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:16.636149    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:16.651698    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:16.651708    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:16.663730    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:16.663739    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:16.683682    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:16.683691    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:16.688830    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:16.688837    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:16.703233    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:16.703243    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:16.714854    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:16.714865    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:16.727391    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:16.727404    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:16.739404    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:16.739418    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:16.754181    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:16.754194    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:16.789930    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:16.789941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:19.304371    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:24.306803    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:24.307207    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:24.342670    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:24.342790    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:24.362028    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:24.362110    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:24.377279    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:24.377352    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:24.389726    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:24.389796    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:24.400854    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:24.400911    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:24.412203    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:24.412266    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:24.423773    8683 logs.go:276] 0 containers: []
	W0731 12:35:24.423787    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:24.423848    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:24.434922    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:24.434939    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:24.434944    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:24.447653    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:24.447664    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:24.473408    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:24.473418    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:24.485427    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:24.485440    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:24.500985    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:24.500999    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:24.516468    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:24.516480    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:24.535097    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:24.535108    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:24.540219    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:24.540226    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:24.554057    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:24.554071    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:24.568364    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:24.568376    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:24.579909    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:24.579921    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:24.593110    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:24.593122    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:24.606726    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:24.606739    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:24.618919    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:24.618932    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:24.657492    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:24.657500    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:27.193395    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:32.194063    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:32.194261    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:32.216205    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:32.216315    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:32.231668    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:32.231751    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:32.244418    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:32.244495    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:32.255100    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:32.255171    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:32.265835    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:32.265906    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:32.276406    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:32.276477    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:32.286709    8683 logs.go:276] 0 containers: []
	W0731 12:35:32.286719    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:32.286776    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:32.296801    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:32.296817    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:32.296821    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:32.336487    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:32.336497    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:32.348399    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:32.348408    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:32.360187    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:32.360197    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:32.371707    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:32.371717    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:32.383745    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:32.383754    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:32.407376    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:32.407386    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:32.424805    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:32.424816    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:32.439704    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:32.439714    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:32.452174    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:32.452184    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:32.488096    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:32.488112    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:32.508269    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:32.508280    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:32.522819    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:32.522833    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:32.534457    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:32.534467    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:32.539374    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:32.539381    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:35.059446    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:40.061761    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:40.061902    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:40.073570    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:40.073640    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:40.086657    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:40.086735    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:40.097872    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:40.097938    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:40.108730    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:40.108803    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:40.119631    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:40.119705    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:40.130797    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:40.130866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:40.141218    8683 logs.go:276] 0 containers: []
	W0731 12:35:40.141229    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:40.141284    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:40.151882    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:40.151900    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:40.151906    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:40.175234    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:40.175241    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:40.187353    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:40.187364    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:40.199472    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:40.199484    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:40.214166    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:40.214179    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:40.228370    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:40.228382    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:40.240171    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:40.240181    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:40.252738    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:40.252749    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:40.267831    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:40.267843    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:40.286129    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:40.286142    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:40.323158    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:40.323165    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:40.358922    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:40.358938    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:40.371286    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:40.371301    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:40.376283    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:40.376290    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:40.388245    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:40.388255    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:42.902183    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:47.904607    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:47.904833    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:47.932814    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:47.932908    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:47.949217    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:47.949289    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:47.961433    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:47.961506    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:47.972237    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:47.972304    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:47.982909    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:47.982978    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:47.993481    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:47.993549    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:48.004320    8683 logs.go:276] 0 containers: []
	W0731 12:35:48.004332    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:48.004392    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:48.015843    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:48.015862    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:48.015868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:48.030255    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:48.030264    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:48.042451    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:48.042460    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:48.059873    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:48.059884    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:48.072152    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:48.072164    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:48.108446    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:48.108454    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:48.120008    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:48.120019    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:48.132237    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:48.132246    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:48.168537    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:48.168547    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:48.188514    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:48.188524    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:48.213191    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:48.213199    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:48.228704    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:48.228714    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:48.240659    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:48.240669    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:48.245419    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:48.245425    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:48.259495    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:48.259505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:50.779487    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:55.781733    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:55.782093    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:55.819446    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:55.819582    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:55.840883    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:55.840977    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:55.856594    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:55.856676    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:55.870202    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:55.870274    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:55.881455    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:55.881521    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:55.892662    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:55.892732    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:55.903040    8683 logs.go:276] 0 containers: []
	W0731 12:35:55.903052    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:55.903111    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:55.913565    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:55.913583    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:55.913588    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:55.929341    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:55.929350    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:55.940670    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:55.940679    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:55.957971    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:55.957982    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:55.962894    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:55.962909    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:55.975636    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:55.975650    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:56.001650    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:56.001660    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:56.015334    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:56.015345    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:56.053503    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:56.053512    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:56.067402    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:56.067412    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:56.086691    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:56.086701    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:56.121444    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:56.121456    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:56.136160    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:56.136173    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:56.151687    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:56.151700    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:56.167245    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:56.167258    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:58.685783    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:03.688149    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:03.688439    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:03.720564    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:03.720698    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:03.740441    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:03.740538    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:03.757465    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:03.757550    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:03.771686    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:03.771765    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:03.783644    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:03.783720    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:03.794447    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:03.794520    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:03.805777    8683 logs.go:276] 0 containers: []
	W0731 12:36:03.805789    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:03.805848    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:03.817841    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:03.817857    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:03.817862    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:03.829674    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:03.829687    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:03.868433    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:03.868442    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:03.903624    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:03.903636    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:03.917714    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:03.917727    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:03.922668    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:03.922677    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:03.937962    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:03.937975    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:03.956400    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:03.956411    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:03.968063    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:03.968074    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:03.979665    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:03.979675    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:04.005839    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:04.005853    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:04.019798    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:04.019811    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:04.032273    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:04.032285    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:04.059972    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:04.059985    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:04.074195    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:04.074206    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:06.587524    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:11.589859    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:11.589982    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:11.606383    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:11.606463    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:11.619253    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:11.619313    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:11.630508    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:11.630566    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:11.641325    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:11.641394    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:11.652098    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:11.652163    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:11.662630    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:11.662699    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:11.673356    8683 logs.go:276] 0 containers: []
	W0731 12:36:11.673367    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:11.673428    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:11.687021    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:11.687040    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:11.687045    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:11.711673    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:11.711683    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:11.734871    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:11.734878    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:11.747226    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:11.747238    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:11.759283    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:11.759293    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:11.771095    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:11.771105    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:11.783206    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:11.783217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:11.795029    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:11.795040    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:11.813559    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:11.813573    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:11.825622    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:11.825632    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:11.862064    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:11.862075    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:11.876306    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:11.876319    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:11.914707    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:11.914715    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:11.929708    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:11.929720    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:11.934651    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:11.934657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:14.451769    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:19.454051    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:19.454238    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:19.465801    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:19.465883    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:19.480452    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:19.480522    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:19.490837    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:19.490916    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:19.502098    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:19.502171    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:19.517037    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:19.517109    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:19.527170    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:19.527243    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:19.537928    8683 logs.go:276] 0 containers: []
	W0731 12:36:19.537949    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:19.538010    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:19.549025    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:19.549043    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:19.549048    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:19.572907    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:19.572917    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:19.577508    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:19.577516    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:19.591473    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:19.591487    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:19.608301    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:19.608313    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:19.620212    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:19.620226    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:19.658174    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:19.658185    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:19.674529    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:19.674539    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:19.691877    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:19.691887    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:19.725956    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:19.725967    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:19.739941    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:19.739951    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:19.751657    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:19.751666    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:19.765031    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:19.765043    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:19.777095    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:19.777106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:19.788517    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:19.788528    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:22.302557    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:27.304812    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:27.304953    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:27.320116    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:27.320205    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:27.332668    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:27.332744    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:27.343629    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:27.343700    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:27.355138    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:27.355207    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:27.365769    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:27.365841    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:27.385094    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:27.385160    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:27.395271    8683 logs.go:276] 0 containers: []
	W0731 12:36:27.395284    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:27.395341    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:27.405340    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:27.405358    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:27.405364    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:27.418844    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:27.418855    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:27.432797    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:27.432812    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:27.534509    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:27.534523    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:27.550837    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:27.550849    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:27.570249    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:27.570260    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:27.582227    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:27.582237    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:27.598530    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:27.598543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:27.611375    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:27.611391    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:27.629289    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:27.629303    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:27.641245    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:27.641256    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:27.658118    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:27.658132    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:27.669311    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:27.669325    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:27.693503    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:27.693512    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:27.731082    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:27.731088    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:30.236063    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:35.236349    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:35.236617    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:35.265244    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:35.265374    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:35.283535    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:35.283634    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:35.297520    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:35.297589    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:35.309393    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:35.309463    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:35.320010    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:35.320068    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:35.330901    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:35.330973    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:35.341211    8683 logs.go:276] 0 containers: []
	W0731 12:36:35.341222    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:35.341272    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:35.352237    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:35.352254    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:35.352259    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:35.369773    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:35.369787    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:35.381109    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:35.381119    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:35.396395    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:35.396406    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:35.409222    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:35.409231    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:35.421027    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:35.421036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:35.432768    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:35.432777    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:35.457159    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:35.457166    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:35.468897    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:35.468908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:35.507295    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:35.507308    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:35.526820    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:35.526831    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:35.538960    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:35.538973    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:35.554580    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:35.554595    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:35.559631    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:35.559639    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:35.596771    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:35.596784    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:38.109003    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:43.111168    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:43.111380    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:43.129435    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:43.129524    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:43.144313    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:43.144388    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:43.155634    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:43.155711    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:43.166731    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:43.166797    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:43.180817    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:43.180889    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:43.192421    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:43.192490    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:43.202894    8683 logs.go:276] 0 containers: []
	W0731 12:36:43.202909    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:43.202967    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:43.213938    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:43.213958    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:43.213963    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:43.236960    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:43.236967    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:43.248734    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:43.248747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:43.262610    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:43.262624    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:43.274091    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:43.274104    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:43.289321    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:43.289334    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:43.303672    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:43.303684    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:43.341000    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:43.341011    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:43.353025    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:43.353039    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:43.365030    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:43.365046    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:43.376560    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:43.376571    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:43.388390    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:43.388402    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:43.427016    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:43.427024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:43.441874    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:43.441884    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:43.459326    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:43.459337    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:45.965865    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:50.968180    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:50.973586    8683 out.go:177] 
	W0731 12:36:50.977525    8683 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:36:50.977531    8683 out.go:239] * 
	W0731 12:36:50.978025    8683 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:36:50.985549    8683 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-568000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-31 12:36:51.084401 -0700 PDT m=+1327.948083501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-568000 -n running-upgrade-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-568000 -n running-upgrade-568000: exit status 2 (15.620163042s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-568000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo cat                            | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo cat                            | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo cat                            | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo cat                            | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo                                | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo find                           | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-782000 sudo crio                           | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-782000                                     | cilium-782000             | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT | 31 Jul 24 12:26 PDT |
	| start   | -p kubernetes-upgrade-490000                         | kubernetes-upgrade-490000 | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-353000                             | offline-docker-353000     | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT | 31 Jul 24 12:26 PDT |
	| start   | -p stopped-upgrade-443000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:26 PDT | 31 Jul 24 12:27 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-490000                         | kubernetes-upgrade-490000 | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT | 31 Jul 24 12:26 PDT |
	| start   | -p kubernetes-upgrade-490000                         | kubernetes-upgrade-490000 | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-490000                         | kubernetes-upgrade-490000 | jenkins | v1.33.1 | 31 Jul 24 12:27 PDT | 31 Jul 24 12:27 PDT |
	| start   | -p running-upgrade-568000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:27 PDT | 31 Jul 24 12:28 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-443000 stop                          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:27 PDT | 31 Jul 24 12:27 PDT |
	| start   | -p stopped-upgrade-443000                            | stopped-upgrade-443000    | jenkins | v1.33.1 | 31 Jul 24 12:27 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-568000                            | running-upgrade-568000    | jenkins | v1.33.1 | 31 Jul 24 12:28 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:28:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:28:06.359308    8683 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:28:06.359429    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:06.359433    8683 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:06.359436    8683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:06.359593    8683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:28:06.360737    8683 out.go:298] Setting JSON to false
	I0731 12:28:06.378200    8683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5255,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:28:06.378295    8683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:28:06.383225    8683 out.go:177] * [running-upgrade-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:28:06.391242    8683 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:28:06.391273    8683 notify.go:220] Checking for updates...
	I0731 12:28:06.398211    8683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:06.402233    8683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:28:06.405241    8683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:28:06.408232    8683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:28:06.411231    8683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:28:06.414443    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:06.417134    8683 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:28:06.420207    8683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:28:06.423204    8683 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:28:06.430187    8683 start.go:297] selected driver: qemu2
	I0731 12:28:06.430194    8683 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:06.430246    8683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:28:06.432471    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:28:06.432489    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:06.432519    8683 start.go:340] cluster config:
	{Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:06.432567    8683 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:28:06.440231    8683 out.go:177] * Starting "running-upgrade-568000" primary control-plane node in "running-upgrade-568000" cluster
	I0731 12:28:06.444202    8683 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:06.444216    8683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:28:06.444227    8683 cache.go:56] Caching tarball of preloaded images
	I0731 12:28:06.444270    8683 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:28:06.444275    8683 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:28:06.444321    8683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/config.json ...
	I0731 12:28:06.444691    8683 start.go:360] acquireMachinesLock for running-upgrade-568000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:28:18.173917    8683 start.go:364] duration metric: took 11.72940225s to acquireMachinesLock for "running-upgrade-568000"
	I0731 12:28:18.173943    8683 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:28:18.173953    8683 fix.go:54] fixHost starting: 
	I0731 12:28:18.174747    8683 fix.go:112] recreateIfNeeded on running-upgrade-568000: state=Running err=<nil>
	W0731 12:28:18.174756    8683 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:28:18.177733    8683 out.go:177] * Updating the running qemu2 "running-upgrade-568000" VM ...
	I0731 12:28:17.179825    8672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0731 12:28:17.180053    8672 machine.go:94] provisionDockerMachine start ...
	I0731 12:28:17.180095    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.180239    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.180243    8672 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:28:17.249233    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:28:17.249249    8672 buildroot.go:166] provisioning hostname "stopped-upgrade-443000"
	I0731 12:28:17.249328    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.249452    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.249457    8672 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-443000 && echo "stopped-upgrade-443000" | sudo tee /etc/hostname
	I0731 12:28:17.321340    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-443000
	
	I0731 12:28:17.321408    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.321534    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.321542    8672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-443000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-443000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-443000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:28:17.394025    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:17.394041    8672 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19360-6578/.minikube CaCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19360-6578/.minikube}
	I0731 12:28:17.394050    8672 buildroot.go:174] setting up certificates
	I0731 12:28:17.394056    8672 provision.go:84] configureAuth start
	I0731 12:28:17.394065    8672 provision.go:143] copyHostCerts
	I0731 12:28:17.394159    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem, removing ...
	I0731 12:28:17.394166    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem
	I0731 12:28:17.394270    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem (1078 bytes)
	I0731 12:28:17.394443    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem, removing ...
	I0731 12:28:17.394447    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem
	I0731 12:28:17.394495    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem (1123 bytes)
	I0731 12:28:17.394597    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem, removing ...
	I0731 12:28:17.394602    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem
	I0731 12:28:17.394646    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem (1679 bytes)
	I0731 12:28:17.394722    8672 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-443000 san=[127.0.0.1 localhost minikube stopped-upgrade-443000]
	I0731 12:28:17.485777    8672 provision.go:177] copyRemoteCerts
	I0731 12:28:17.485829    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:28:17.485838    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:17.523006    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 12:28:17.530510    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:28:17.536946    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:28:17.542916    8672 provision.go:87] duration metric: took 148.859125ms to configureAuth
	I0731 12:28:17.542924    8672 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:28:17.543024    8672 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:17.543071    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.543154    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.543163    8672 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:28:17.612032    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:28:17.612046    8672 buildroot.go:70] root file system type: tmpfs
	I0731 12:28:17.612101    8672 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:28:17.612154    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.612268    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.612301    8672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:28:17.684357    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:28:17.684415    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.684541    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.684549    8672 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:28:18.054801    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 12:28:18.054814    8672 machine.go:97] duration metric: took 874.7695ms to provisionDockerMachine
	I0731 12:28:18.054821    8672 start.go:293] postStartSetup for "stopped-upgrade-443000" (driver="qemu2")
	I0731 12:28:18.054828    8672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:28:18.054896    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:28:18.054910    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:18.091783    8672 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:28:18.093257    8672 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:28:18.093267    8672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/addons for local assets ...
	I0731 12:28:18.093354    8672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/files for local assets ...
	I0731 12:28:18.093471    8672 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem -> 70682.pem in /etc/ssl/certs
	I0731 12:28:18.093604    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:28:18.096319    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:18.103027    8672 start.go:296] duration metric: took 48.201958ms for postStartSetup
	I0731 12:28:18.103039    8672 fix.go:56] duration metric: took 21.834501875s for fixHost
	I0731 12:28:18.103068    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.103170    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:18.103175    8672 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 12:28:18.173855    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454098.070024296
	
	I0731 12:28:18.173867    8672 fix.go:216] guest clock: 1722454098.070024296
	I0731 12:28:18.173871    8672 fix.go:229] Guest: 2024-07-31 12:28:18.070024296 -0700 PDT Remote: 2024-07-31 12:28:18.103041 -0700 PDT m=+21.947280293 (delta=-33.016704ms)
	I0731 12:28:18.173883    8672 fix.go:200] guest clock delta is within tolerance: -33.016704ms
	I0731 12:28:18.173885    8672 start.go:83] releasing machines lock for "stopped-upgrade-443000", held for 21.905357291s
	I0731 12:28:18.173954    8672 ssh_runner.go:195] Run: cat /version.json
	I0731 12:28:18.173963    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:18.173992    8672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:28:18.174013    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	W0731 12:28:18.174725    8672 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51213: connect: connection refused
	I0731 12:28:18.174760    8672 retry.go:31] will retry after 185.307583ms: dial tcp [::1]:51213: connect: connection refused
	W0731 12:28:18.212206    8672 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:28:18.212293    8672 ssh_runner.go:195] Run: systemctl --version
	I0731 12:28:18.214276    8672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:28:18.215960    8672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:28:18.215999    8672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:28:18.219020    8672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:28:18.223999    8672 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:28:18.224013    8672 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.224129    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:18.231812    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:28:18.235472    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:28:18.239161    8672 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:28:18.239214    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:28:18.243214    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:18.247207    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:28:18.250748    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:18.254496    8672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:28:18.258776    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:28:18.262321    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:28:18.265378    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:28:18.268097    8672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:28:18.271525    8672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:28:18.275118    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:18.347392    8672 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:28:18.359739    8672 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.359810    8672 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:28:18.377769    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:18.385201    8672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:28:18.394561    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:18.402672    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:18.444596    8672 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:28:18.494803    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:18.500405    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:18.507700    8672 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:28:18.509421    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:28:18.512478    8672 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:28:18.519171    8672 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:28:18.597899    8672 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:28:18.675428    8672 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:28:18.675492    8672 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:28:18.681187    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:18.754368    8672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:19.880444    8672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1260745s)
	I0731 12:28:19.880504    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:28:19.885374    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:19.890399    8672 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:28:19.959097    8672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:28:20.019480    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:20.103252    8672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:28:20.109650    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:20.113932    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:20.181598    8672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:28:20.226151    8672 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:28:20.226233    8672 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:28:20.228914    8672 start.go:563] Will wait 60s for crictl version
	I0731 12:28:20.228975    8672 ssh_runner.go:195] Run: which crictl
	I0731 12:28:20.230482    8672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:28:20.246201    8672 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:28:20.246277    8672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:20.268314    8672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:20.292074    8672 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:28:20.292143    8672 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:28:20.293691    8672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:28:20.297767    8672 kubeadm.go:883] updating cluster {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:28:20.297821    8672 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:20.297870    8672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:20.309446    8672 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:20.309457    8672 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:20.309509    8672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:20.312771    8672 ssh_runner.go:195] Run: which lz4
	I0731 12:28:20.314431    8672 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 12:28:20.315824    8672 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:28:20.315842    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:28:18.184588    8683 machine.go:94] provisionDockerMachine start ...
	I0731 12:28:18.184642    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.184769    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.184774    8683 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:28:18.241381    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0731 12:28:18.241398    8683 buildroot.go:166] provisioning hostname "running-upgrade-568000"
	I0731 12:28:18.241439    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.241559    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.241566    8683 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-568000 && echo "running-upgrade-568000" | sudo tee /etc/hostname
	I0731 12:28:18.302939    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-568000
	
	I0731 12:28:18.302993    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.303110    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.303118    8683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-568000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-568000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-568000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:28:18.369370    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:18.369385    8683 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19360-6578/.minikube CaCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19360-6578/.minikube}
	I0731 12:28:18.369394    8683 buildroot.go:174] setting up certificates
	I0731 12:28:18.369398    8683 provision.go:84] configureAuth start
	I0731 12:28:18.369410    8683 provision.go:143] copyHostCerts
	I0731 12:28:18.369516    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem, removing ...
	I0731 12:28:18.369527    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem
	I0731 12:28:18.369657    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem (1078 bytes)
	I0731 12:28:18.369837    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem, removing ...
	I0731 12:28:18.369842    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem
	I0731 12:28:18.369891    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem (1123 bytes)
	I0731 12:28:18.369991    8683 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem, removing ...
	I0731 12:28:18.369995    8683 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem
	I0731 12:28:18.370035    8683 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem (1679 bytes)
	I0731 12:28:18.370169    8683 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-568000 san=[127.0.0.1 localhost minikube running-upgrade-568000]
	I0731 12:28:18.592139    8683 provision.go:177] copyRemoteCerts
	I0731 12:28:18.592194    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:28:18.592204    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.624530    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 12:28:18.631644    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:28:18.647040    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:28:18.658117    8683 provision.go:87] duration metric: took 288.717416ms to configureAuth
	I0731 12:28:18.658149    8683 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:28:18.658277    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:18.658318    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.658412    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.658418    8683 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:28:18.714278    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:28:18.714290    8683 buildroot.go:70] root file system type: tmpfs
	I0731 12:28:18.714351    8683 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:28:18.714407    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.714529    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.714562    8683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:28:18.773381    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:28:18.773443    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.773568    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.773577    8683 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:28:18.830650    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:18.830660    8683 machine.go:97] duration metric: took 646.076875ms to provisionDockerMachine
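The update command above is an idempotent install idiom: the freshly rendered unit replaces the installed one, and docker is reloaded/restarted, only when diff reports a difference. Unrolled, the same logic reads:

  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
  fi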
	I0731 12:28:18.830666    8683 start.go:293] postStartSetup for "running-upgrade-568000" (driver="qemu2")
	I0731 12:28:18.830671    8683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:28:18.830719    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:28:18.830728    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.867778    8683 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:28:18.870337    8683 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:28:18.870345    8683 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/addons for local assets ...
	I0731 12:28:18.870420    8683 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/files for local assets ...
	I0731 12:28:18.870507    8683 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem -> 70682.pem in /etc/ssl/certs
	I0731 12:28:18.870603    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:28:18.874658    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:18.883113    8683 start.go:296] duration metric: took 52.441041ms for postStartSetup
	I0731 12:28:18.883132    8683 fix.go:56] duration metric: took 709.196292ms for fixHost
	I0731 12:28:18.883185    8683 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.883324    8683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104802a10] 0x104805270 <nil>  [] 0s} localhost 51250 <nil> <nil>}
	I0731 12:28:18.883330    8683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:28:18.945275    8683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454099.311756363
	
	I0731 12:28:18.945284    8683 fix.go:216] guest clock: 1722454099.311756363
	I0731 12:28:18.945287    8683 fix.go:229] Guest: 2024-07-31 12:28:19.311756363 -0700 PDT Remote: 2024-07-31 12:28:18.883134 -0700 PDT m=+12.543945335 (delta=428.622363ms)
	I0731 12:28:18.945298    8683 fix.go:200] guest clock delta is within tolerance: 428.622363ms
	I0731 12:28:18.945303    8683 start.go:83] releasing machines lock for "running-upgrade-568000", held for 771.387333ms
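The guest-clock probe above simply runs date +%s.%N on both sides and compares; the same check can be reproduced over the forwarded SSH port (a sketch, reusing the key path from the log):

  HOST=$(date +%s.%N)
  GUEST=$(ssh -p 51250 -i /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa docker@localhost 'date +%s.%N')
  echo "delta: $(echo "$GUEST - $HOST" | bc)s"   # ~0.43s here, within tolerance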
	I0731 12:28:18.945379    8683 ssh_runner.go:195] Run: cat /version.json
	I0731 12:28:18.945390    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:28:18.945379    8683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:28:18.945427    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	W0731 12:28:18.974217    8683 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:28:18.974280    8683 ssh_runner.go:195] Run: systemctl --version
	I0731 12:28:18.976636    8683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:28:18.980713    8683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:28:18.980764    8683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:28:18.984213    8683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:28:18.990638    8683 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
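The two find/sed passes above force every bridge-style CNI config onto the pod CIDR that kubeadm is configured with below. Illustratively, the effect on 87-podman-bridge.conflist (pre-rewrite values are assumed podman defaults, not shown in the log) is:

  "subnet": "10.88.0.0/16"   ->  "subnet": "10.244.0.0/16"
  "gateway": "10.88.0.1"     ->  "gateway": "10.244.0.1"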
	I0731 12:28:18.990652    8683 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.990726    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:19.000946    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:28:19.005324    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:28:19.008232    8683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:28:19.008275    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:28:19.011559    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:19.015021    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:28:19.018392    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:19.030152    8683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:28:19.033245    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:28:19.036431    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:28:19.039750    8683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:28:19.044514    8683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:28:19.051732    8683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:28:19.057011    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:19.199556    8683 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:28:19.213364    8683 start.go:495] detecting cgroup driver to use...
	I0731 12:28:19.213436    8683 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:28:19.218646    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:19.223393    8683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:28:19.230437    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:19.234881    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:19.239442    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:19.245087    8683 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:28:19.246422    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:28:19.249047    8683 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:28:19.253889    8683 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:28:19.357461    8683 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:28:19.468103    8683 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:28:19.468162    8683 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:28:19.473475    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:19.568834    8683 ssh_runner.go:195] Run: sudo systemctl restart docker
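The 130-byte daemon.json written above is not echoed into the log; given the "cgroupfs" driver named on the preceding line, a plausible shape (an assumption, not the verbatim payload) is:

  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }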
	I0731 12:28:21.239486    8672 docker.go:649] duration metric: took 925.106416ms to copy over tarball
	I0731 12:28:21.239550    8672 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:28:22.412347    8672 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17279975s)
	I0731 12:28:22.412361    8672 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:28:22.429101    8672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:22.432610    8672 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:28:22.438176    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:22.513881    8672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:24.213185    8672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.699313333s)
	I0731 12:28:24.213289    8672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:24.224978    8672 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:24.224986    8672 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:24.224991    8672 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
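The preload tarball ships these images under their legacy k8s.gcr.io names, so every registry.k8s.io name is treated as missing and reloaded from the host-side cache below. A manual retag would also reconcile the names (a workaround sketch, not what minikube does here):

  for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
    docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
  done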
	I0731 12:28:24.228868    8672 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.230803    8672 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.232576    8672 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.232605    8672 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.235111    8672 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.235131    8672 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.237361    8672 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:28:24.237361    8672 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.239418    8672 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.239618    8672 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.241504    8672 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:28:24.241686    8672 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.242864    8672 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.243058    8672 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.244465    8672 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.245350    8672 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.632976    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.644571    8672 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:28:24.644610    8672 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.644660    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.655818    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0731 12:28:24.678444    8672 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:24.678569    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.679421    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:28:24.681628    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.684938    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.690963    8672 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:28:24.690984    8672 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.691029    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.695870    8672 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:28:24.695891    8672 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:28:24.695937    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:28:24.702721    8672 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:28:24.702742    8672 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.702788    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.713723    8672 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:28:24.713744    8672 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.713786    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.718884    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:28:24.719002    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:24.722473    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:28:24.722541    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:28:24.722574    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:28:24.732092    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:28:24.732117    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:28:24.732128    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:28:24.732127    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:28:24.732160    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:28:24.732311    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:24.738966    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.741624    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.743762    8672 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:28:24.743774    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:28:24.744141    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:28:24.744167    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:28:24.796725    8672 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:28:24.796756    8672 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.796787    8672 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:28:24.796820    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.796844    8672 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.796871    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.814212    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0731 12:28:24.836241    8672 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:24.836363    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.879934    8672 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:24.879974    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:28:24.880039    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:28:24.880293    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:28:24.903862    8672 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:28:24.903898    8672 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.903969    8672 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:25.020686    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:28:25.020739    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:28:25.020867    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:28:25.028871    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:28:25.028905    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:28:25.111852    8672 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:28:25.111867    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:28:25.434944    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:28:25.434968    8672 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:25.434976    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:28:25.588692    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:28:25.588732    8672 cache_images.go:92] duration metric: took 1.363755541s to LoadCachedImages
	W0731 12:28:25.588777    8672 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
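Each cache hit above follows the same pattern: stat the target path on the guest, scp the tarball over when absent, then stream it into the daemon, e.g. sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load. The closing X fires because one cache file is genuinely absent on the host, which a quick check would confirm:

  ls /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
  # expected per the warning above: No such file or directory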
	I0731 12:28:25.588783    8672 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:28:25.588856    8672 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-443000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:28:25.588922    8672 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:28:25.603413    8672 cni.go:84] Creating CNI manager for ""
	I0731 12:28:25.603424    8672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:25.603430    8672 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:28:25.603438    8672 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-443000 NodeName:stopped-upgrade-443000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:28:25.603500    8672 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-443000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:28:25.603554    8672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:28:25.607025    8672 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:28:25.607056    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:28:25.610305    8672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:28:25.615415    8672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:28:25.620552    8672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
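With the rendered config staged as kubeadm.yaml.new, it can be sanity-checked against the pinned binaries before any init phase runs (a sketch; --dry-run makes no changes to the node):

  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
    kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new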
	I0731 12:28:25.626418    8672 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:28:25.627763    8672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:28:25.631521    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:25.703161    8672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:28:25.709183    8672 certs.go:68] Setting up /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000 for IP: 10.0.2.15
	I0731 12:28:25.709193    8672 certs.go:194] generating shared ca certs ...
	I0731 12:28:25.709203    8672 certs.go:226] acquiring lock for ca certs: {Name:mk2e60bc5d1dd01990778560005f880e3d93cfec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.709491    8672 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key
	I0731 12:28:25.709547    8672 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key
	I0731 12:28:25.709552    8672 certs.go:256] generating profile certs ...
	I0731 12:28:25.709637    8672 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key
	I0731 12:28:25.709653    8672 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4
	I0731 12:28:25.709665    8672 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:28:25.773805    8672 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 ...
	I0731 12:28:25.773817    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4: {Name:mk4622d7feb6c59e775b77a6d0024e035ded3ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.774160    8672 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 ...
	I0731 12:28:25.774165    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4: {Name:mk3e19b2276c5e5d3fd8c2bfa1bf3463fca3b07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.774299    8672 certs.go:381] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt
	I0731 12:28:25.774432    8672 certs.go:385] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key
	I0731 12:28:25.774582    8672 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.key
	I0731 12:28:25.774717    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem (1338 bytes)
	W0731 12:28:25.774746    8672 certs.go:480] ignoring /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068_empty.pem, impossibly tiny 0 bytes
	I0731 12:28:25.774751    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:28:25.774775    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem (1078 bytes)
	I0731 12:28:25.774795    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:28:25.774812    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem (1679 bytes)
	I0731 12:28:25.774850    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:25.775203    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:28:25.782069    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:28:25.788850    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:28:25.795735    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:28:25.803052    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:28:25.810441    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:28:25.817358    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:28:25.824276    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 12:28:25.831257    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /usr/share/ca-certificates/70682.pem (1708 bytes)
	I0731 12:28:25.839025    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:28:25.846669    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem --> /usr/share/ca-certificates/7068.pem (1338 bytes)
	I0731 12:28:25.854905    8672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:28:25.861050    8672 ssh_runner.go:195] Run: openssl version
	I0731 12:28:25.863176    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70682.pem && ln -fs /usr/share/ca-certificates/70682.pem /etc/ssl/certs/70682.pem"
	I0731 12:28:25.867116    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.868909    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:16 /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.868938    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.870880    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70682.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:28:25.874734    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:28:25.878457    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.880332    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.880416    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.882450    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:28:25.886298    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7068.pem && ln -fs /usr/share/ca-certificates/7068.pem /etc/ssl/certs/7068.pem"
	I0731 12:28:25.889503    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.891120    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:16 /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.891148    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.893522    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7068.pem /etc/ssl/certs/51391683.0"
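The 3ec20f2e.0, b5213941.0, and 51391683.0 symlink names above are OpenSSL subject hashes, which is what lets TLS tooling resolve a CA by hash inside /etc/ssl/certs. The generic recipe for any cert:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 here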
	I0731 12:28:25.896912    8672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:28:25.898628    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:28:25.901096    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:28:25.903243    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:28:25.905462    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:28:25.907636    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:28:25.909986    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
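The six openssl runs above all pass -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours, so a clean pass means every control-plane cert is valid for at least a day:

  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for >24h" || echo "renewal needed"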
	I0731 12:28:25.912105    8672 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:25.912184    8672 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:25.924026    8672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:28:25.927887    8672 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:28:25.927896    8672 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:28:25.927940    8672 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:28:25.931447    8672 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:25.931492    8672 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-443000" does not appear in /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:25.931508    8672 kubeconfig.go:62] /Users/jenkins/minikube-integration/19360-6578/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-443000" cluster setting kubeconfig missing "stopped-upgrade-443000" context setting]
	I0731 12:28:25.931693    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.932333    8672 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:28:25.933223    8672 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:28:25.936571    8672 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-443000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 12:28:25.936582    8672 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:28:25.936638    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:25.948928    8672 docker.go:483] Stopping containers: [bc8f9494b72e 681b91b46f8a d36958118793 c9212cfe387a 420f9dcb4cd0 a607e0e22226 dd7327a89049 575c86423b5f]
	I0731 12:28:25.949003    8672 ssh_runner.go:195] Run: docker stop bc8f9494b72e 681b91b46f8a d36958118793 c9212cfe387a 420f9dcb4cd0 a607e0e22226 dd7327a89049 575c86423b5f
	I0731 12:28:25.961710    8672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:28:25.967464    8672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:28:25.971008    8672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:28:25.971017    8672 kubeadm.go:157] found existing configuration files:
	
	I0731 12:28:25.971056    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf
	I0731 12:28:25.974143    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:28:25.974180    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:28:25.976934    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf
	I0731 12:28:25.979629    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:28:25.979668    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:28:25.982971    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf
	I0731 12:28:25.986115    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:28:25.986166    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:28:25.989379    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf
	I0731 12:28:25.992149    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:28:25.992206    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:28:25.995594    8672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:28:25.998815    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.025332    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:27.392013    8683 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.823257834s)
	I0731 12:28:27.392147    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:28:27.397858    8683 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 12:28:27.406836    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:27.412244    8683 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:28:27.503759    8683 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:28:27.597822    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:27.667839    8683 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:28:27.674051    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:27.678845    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:27.754764    8683 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:28:27.798555    8683 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:28:27.798630    8683 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:28:27.802562    8683 start.go:563] Will wait 60s for crictl version
	I0731 12:28:27.802618    8683 ssh_runner.go:195] Run: which crictl
	I0731 12:28:27.804794    8683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:28:27.819212    8683 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:28:27.819271    8683 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:27.832708    8683 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:26.390636    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.501263    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.529900    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.552983    8672 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:28:26.553060    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.055224    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.555096    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.560930    8672 api_server.go:72] duration metric: took 1.007963584s to wait for apiserver process to appear ...
	I0731 12:28:27.560939    8672 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:28:27.560948    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
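The healthz wait that starts here can be approximated with a plain curl loop. A sketch only: minikube authenticates with client certificates, whereas -k below simply skips verification of the self-signed CA; host and port are taken from the log line above, and the ~5s spacing between "stopped" entries is the client timeout, not a sleep.

    # Poll the apiserver health endpoint until it answers "ok".
    until [ "$(curl -sk --max-time 5 https://10.0.2.15:8443/healthz)" = "ok" ]; do
      sleep 1   # avoid a busy loop while the connection is being refused
    done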
	I0731 12:28:27.850612    8683 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:28:27.850671    8683 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:28:27.851986    8683 kubeadm.go:883] updating cluster {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:28:27.852039    8683 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:27.852078    8683 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:27.863237    8683 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:27.863246    8683 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:27.863289    8683 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:27.866384    8683 ssh_runner.go:195] Run: which lz4
	I0731 12:28:27.867608    8683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:28:27.869056    8683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:28:27.869070    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:28:28.841132    8683 docker.go:649] duration metric: took 973.565125ms to copy over tarball
	I0731 12:28:28.841195    8683 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:28:30.265991    8683 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.424800042s)
	I0731 12:28:30.266007    8683 ssh_runner.go:146] rm: /preloaded.tar.lz4
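The preload step above (scp the archive in, extract, remove) is straightforward to reproduce. A sketch assuming SSH access to the guest as $NODE (a placeholder), the cache path shortened to $HOME/.minikube, and lz4 available inside the guest:

    tarball=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    scp "$HOME/.minikube/cache/preloaded-tarball/$tarball" "$NODE":/preloaded.tar.lz4
    # Unpack Docker's image store under /var, preserving file capabilities,
    # then drop the archive to free disk.
    ssh "$NODE" 'sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'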
	I0731 12:28:30.282834    8683 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:30.286826    8683 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:28:30.292715    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:30.370133    8683 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:32.562963    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:32.562990    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:31.569178    8683 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.199045667s)
	I0731 12:28:31.569275    8683 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:31.588720    8683 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:31.588730    8683 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:31.588736    8683 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:28:31.592706    8683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:31.594565    8683 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:31.596916    8683 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:28:31.597092    8683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:31.600260    8683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:31.600281    8683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:31.602752    8683 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:28:31.603066    8683 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:31.604581    8683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:31.604636    8683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:31.606400    8683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:31.606515    8683 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:31.607376    8683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:31.607605    8683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:31.609138    8683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:31.609677    8683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:31.987863    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:28:32.000665    8683 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:28:32.000690    8683 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:28:32.000741    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:28:32.012135    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:28:32.012240    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:28:32.013809    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:28:32.013820    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:28:32.019511    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.023594    8683 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:28:32.023611    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
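The load step above is worth a note: sudo applies only to reading the archive, and docker load consumes the tar stream on stdin, so no root-owned temp copy is needed. The same step with a verification read-back (image name and path from the log):

    sudo cat /var/lib/minikube/images/pause_3.7 | docker load
    # Confirm the image is now present in the node's daemon.
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7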
	W0731 12:28:32.025293    8683 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:32.025423    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.025598    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.038928    8683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:28:32.038951    8683 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.039003    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:32.062215    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.065618    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.077157    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.082149    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:28:32.082204    8683 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:28:32.082222    8683 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.082257    8683 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:28:32.082268    8683 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.082274    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:32.082296    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:32.082337    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:28:32.082361    8683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:28:32.082370    8683 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.082392    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:32.093734    8683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:28:32.093755    8683 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.093804    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:32.116972    8683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:28:32.116994    8683 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.117049    8683 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:32.122581    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:28:32.122594    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:28:32.122632    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:28:32.122675    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:28:32.122689    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:32.122727    8683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:32.132036    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:28:32.132050    8683 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:28:32.132065    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:28:32.132064    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:28:32.132117    8683 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0731 12:28:32.187028    8683 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:32.187132    8683 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.224165    8683 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:32.224181    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:28:32.227264    8683 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:28:32.227293    8683 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.227348    8683 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:32.329717    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:28:32.465592    8683 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:32.465605    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:28:32.611135    8683 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:28:32.611180    8683 cache_images.go:92] duration metric: took 1.022453583s to LoadCachedImages
	W0731 12:28:32.611222    8683 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
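Each "needs transfer" decision above compares the daemon's image ID for a tag against the ID recorded in the cache; on mismatch the tag is removed, re-copied, and re-loaded. A condensed sketch of that check for one image; the expected hash is the one from the log, and the sha256: prefix on inspect output is an assumption of Docker's usual ID format:

    want=sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550
    got="$(docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 2>/dev/null)"
    if [ "$got" != "$want" ]; then
      docker rmi registry.k8s.io/pause:3.7 2>/dev/null || true
      # ...then scp the cached archive over and docker-load it, as above.
    fi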
	I0731 12:28:32.611228    8683 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:28:32.611279    8683 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-568000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:28:32.611347    8683 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:28:32.625459    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:28:32.625470    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:32.625475    8683 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:28:32.625483    8683 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-568000 NodeName:running-upgrade-568000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:28:32.625542    8683 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-568000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:28:32.625604    8683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:28:32.629498    8683 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:28:32.629530    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:28:32.632332    8683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:28:32.637688    8683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:28:32.642724    8683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:28:32.648584    8683 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:28:32.649983    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:32.734653    8683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:28:32.740781    8683 certs.go:68] Setting up /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000 for IP: 10.0.2.15
	I0731 12:28:32.740788    8683 certs.go:194] generating shared ca certs ...
	I0731 12:28:32.740796    8683 certs.go:226] acquiring lock for ca certs: {Name:mk2e60bc5d1dd01990778560005f880e3d93cfec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.740937    8683 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key
	I0731 12:28:32.740972    8683 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key
	I0731 12:28:32.740979    8683 certs.go:256] generating profile certs ...
	I0731 12:28:32.741052    8683 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key
	I0731 12:28:32.741067    8683 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092
	I0731 12:28:32.741084    8683 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:28:32.928997    8683 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 ...
	I0731 12:28:32.929012    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092: {Name:mkdb24f8131ee81d433f06e0864d95e66ab19f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.929586    8683 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 ...
	I0731 12:28:32.929595    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092: {Name:mk150864cffac15216489bfedc4872743595342f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:32.929757    8683 certs.go:381] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt.1b4f6092 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt
	I0731 12:28:32.929894    8683 certs.go:385] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key.1b4f6092 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key
	I0731 12:28:32.930053    8683 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.key
	I0731 12:28:32.930188    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem (1338 bytes)
	W0731 12:28:32.930212    8683 certs.go:480] ignoring /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068_empty.pem, impossibly tiny 0 bytes
	I0731 12:28:32.930217    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:28:32.930237    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem (1078 bytes)
	I0731 12:28:32.930255    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:28:32.930274    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem (1679 bytes)
	I0731 12:28:32.930312    8683 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:32.930626    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:28:32.938414    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:28:32.947670    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:28:32.955225    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:28:32.962300    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:28:32.968757    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 12:28:32.975876    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:28:32.983354    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:28:32.990559    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /usr/share/ca-certificates/70682.pem (1708 bytes)
	I0731 12:28:32.997262    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:28:33.004607    8683 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem --> /usr/share/ca-certificates/7068.pem (1338 bytes)
	I0731 12:28:33.011739    8683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:28:33.016858    8683 ssh_runner.go:195] Run: openssl version
	I0731 12:28:33.018658    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7068.pem && ln -fs /usr/share/ca-certificates/7068.pem /etc/ssl/certs/7068.pem"
	I0731 12:28:33.021877    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.023570    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:16 /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.023592    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7068.pem
	I0731 12:28:33.025396    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7068.pem /etc/ssl/certs/51391683.0"
	I0731 12:28:33.028685    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70682.pem && ln -fs /usr/share/ca-certificates/70682.pem /etc/ssl/certs/70682.pem"
	I0731 12:28:33.032400    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.034148    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:16 /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.034170    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70682.pem
	I0731 12:28:33.036274    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70682.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:28:33.039412    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:28:33.042752    8683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.044525    8683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.044547    8683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:33.046769    8683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
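The symlink dance above follows OpenSSL's hashed-directory convention: a CA is located in /etc/ssl/certs through a link named <subject-hash>.0. A sketch of the same step for one PEM (the path and the b5213941 hash both appear in the log):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash="$(openssl x509 -hash -noout -in "$pem")"   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"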
	I0731 12:28:33.049793    8683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:28:33.051723    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:28:33.053630    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:28:33.055708    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:28:33.058043    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:28:33.060596    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:28:33.062569    8683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
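The -checkend 86400 probes above exit non-zero when a certificate expires within 24 hours, which is what would trigger regeneration. The same sweep as a loop, with the cert list copied from the log:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c.crt expires within 24h"
    done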
	I0731 12:28:33.064602    8683 kubeadm.go:392] StartCluster: {Name:running-upgrade-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:33.064679    8683 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:33.075245    8683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:28:33.078649    8683 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:28:33.078654    8683 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:28:33.078676    8683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:28:33.082027    8683 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.082324    8683 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-568000" does not appear in /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:33.082426    8683 kubeconfig.go:62] /Users/jenkins/minikube-integration/19360-6578/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-568000" cluster setting kubeconfig missing "running-upgrade-568000" context setting]
	I0731 12:28:33.082638    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:33.083035    8683 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b981b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:28:33.083396    8683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:28:33.086166    8683 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-568000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
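Drift detection here is just a diff between the live config and the freshly rendered one; any difference (the CRI socket scheme and the cgroup driver, in this case) forces the reconfiguration that follows. A sketch of the same decision:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      # Non-empty diff: adopt the new config and replay the init phases.
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi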
	I0731 12:28:33.086172    8683 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:28:33.086210    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:33.098415    8683 docker.go:483] Stopping containers: [1a04823f282c 89ccd9d65c44 5f0265d3c82c 5907695a856e 79af8db7b93f 48a551feeb69 765e46f6d6d5 dc5bd8e47595 ee0d0084b71f e8583e731678 77dcff6a0e07 e35e0efca313 c06d364f5fbd 627669f4b423 204324f27a33 6915e8ffd332 ecf03366161d 4f6055948b7a 294c61dc30d9 886c7a3e1e99 53cd6358decf 538cb5ae476c]
	I0731 12:28:33.098485    8683 ssh_runner.go:195] Run: docker stop 1a04823f282c 89ccd9d65c44 5f0265d3c82c 5907695a856e 79af8db7b93f 48a551feeb69 765e46f6d6d5 dc5bd8e47595 ee0d0084b71f e8583e731678 77dcff6a0e07 e35e0efca313 c06d364f5fbd 627669f4b423 204324f27a33 6915e8ffd332 ecf03366161d 4f6055948b7a 294c61dc30d9 886c7a3e1e99 53cd6358decf 538cb5ae476c
	I0731 12:28:33.110433    8683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:28:33.202790    8683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:28:33.206986    8683 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 31 19:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 31 19:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 19:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 31 19:27 /etc/kubernetes/scheduler.conf
	
	I0731 12:28:33.207018    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf
	I0731 12:28:33.210565    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.210586    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:28:33.214859    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf
	I0731 12:28:33.218180    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.218212    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:28:33.221303    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf
	I0731 12:28:33.223918    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.223944    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:28:33.226495    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf
	I0731 12:28:33.229717    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:33.229741    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
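The grep-then-rm pattern above keeps a kubeconfig only if it already points at the expected control-plane endpoint; anything else is deleted so the kubeconfig phase regenerates it. Collapsed into a loop, with the endpoint taken from the log:

    endpoint=https://control-plane.minikube.internal:51322
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done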
	I0731 12:28:33.232538    8683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:28:33.235565    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:33.278789    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:33.778031    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:34.009662    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:34.036046    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:34.058569    8683 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:28:34.058644    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:34.560714    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:35.060410    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:35.560786    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:36.059463    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.563508    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:37.563572    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:36.560991    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.058873    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.560686    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:37.565231    8683 api_server.go:72] duration metric: took 3.506721708s to wait for apiserver process to appear ...
	I0731 12:28:37.565243    8683 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:28:37.565252    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:42.563832    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:42.563909    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:42.567226    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:42.567238    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:47.564358    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:47.564417    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:47.567296    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:47.567318    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:52.565113    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:52.565162    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:52.567455    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:52.567473    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:57.566778    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:57.566886    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:57.567666    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:57.567748    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:02.568192    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:02.568237    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:02.568193    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:02.568237    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:07.569350    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:07.569385    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:07.569350    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:07.569384    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:12.569748    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.569796    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:12.570767    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.570792    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:17.571718    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:17.571755    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:17.571848    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:17.571902    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:22.572786    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:22.572847    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:22.573374    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:22.573408    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:27.574614    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:27.574743    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:27.593336    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:27.593431    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:27.607065    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:27.607138    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:27.618436    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:27.618501    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:27.629355    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:27.629426    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:27.639507    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:27.639573    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:27.649510    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:27.649576    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:27.666032    8672 logs.go:276] 0 containers: []
	W0731 12:29:27.666045    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:27.666100    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:27.676733    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:27.676752    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:27.676758    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:27.691420    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:27.691431    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:27.703449    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:27.703461    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:27.715374    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:27.715387    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:27.732735    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:27.732746    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:27.746386    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:27.746401    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:27.761248    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:27.761259    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:27.775438    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:27.775451    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:27.787594    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:27.787607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:27.798586    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:27.798596    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:27.838343    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:27.838352    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:27.843225    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:27.843232    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:27.856186    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:27.856196    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:27.876179    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:27.876189    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:27.887407    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:27.887421    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:27.912456    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:27.912463    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:28.011385    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:28.011396    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:30.555064    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:27.574481    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:27.574533    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:35.557343    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
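
Interleaved with these passes, both processes (8672 and 8683) probe the apiserver's /healthz endpoint at https://10.0.2.15:8443 and repeatedly fail with "context deadline exceeded"; each failed probe triggers another round of log collection before the next attempt. A sketch of that probe under stated assumptions: the ~5 s client timeout is inferred from the gap between each "Checking" and "stopped" pair above, checkHealthz is a hypothetical name, and the real check lives in minikube's api_server.go:

// healthz_probe.go - a minimal sketch of the apiserver health probe whose
// failures appear above as "stopped: ... context deadline exceeded".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against /healthz. The 5s timeout is an
// assumption inferred from the log timestamps, not a confirmed value.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate, so a bare
		// probe must skip verification (or pin the cluster CA).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the endpoint answered "ok"
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for {
		fmt.Println("Checking apiserver healthz at", url, "...")
		if err := checkHealthz(url); err != nil {
			fmt.Println(err)
			// In the flow above, a failed probe kicks off another
			// log-gathering pass before the next retry.
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}
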
	I0731 12:29:35.557503    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:35.576263    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:35.576370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:35.590908    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:35.590982    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:35.607975    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:35.608057    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:35.620570    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:35.620644    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:35.634125    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:35.634195    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:35.644752    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:35.644823    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:35.654540    8672 logs.go:276] 0 containers: []
	W0731 12:29:35.654552    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:35.654602    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:35.664851    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:35.664871    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:35.664877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:35.676309    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:35.676321    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:35.702581    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:35.702588    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:35.740997    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:35.741004    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:35.745362    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:35.745371    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:35.759971    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:35.759982    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:35.773372    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:35.773381    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:35.784653    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:35.784661    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:35.795530    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:35.795542    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:35.831470    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:35.831485    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:35.871710    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:35.871722    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:35.883353    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:35.883364    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:35.898815    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:35.898826    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:35.911730    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:35.911742    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:35.926329    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:35.926339    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:35.938891    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:35.938905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:35.953701    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:35.953709    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:32.576733    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:32.576778    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:38.473151    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:37.578925    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:37.579104    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:37.595118    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:37.595195    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:37.608107    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:37.608183    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:37.619409    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:37.619484    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:37.629921    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:37.629988    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:37.640232    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:37.640309    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:37.650940    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:37.651010    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:37.660887    8683 logs.go:276] 0 containers: []
	W0731 12:29:37.660901    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:37.660960    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:37.671782    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:37.671801    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:37.671808    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:37.692342    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:37.692353    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:37.703556    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:37.703567    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:37.715882    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:37.715892    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:37.727302    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:37.727313    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:37.766791    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:37.766799    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:37.771591    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:37.771599    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:37.872374    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:37.872388    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:37.884847    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:37.884858    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:37.902726    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:37.902734    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:37.914987    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:37.914997    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:37.942344    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:37.942352    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:37.955948    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:37.955958    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:37.982163    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:37.982173    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:37.997076    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:37.997086    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:38.009083    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:38.009094    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:38.028475    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:38.028491    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:38.041482    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:38.041492    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:38.053295    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:38.053307    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
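
Each gathering pass, like the one just completed, fans out to a fixed set of shell commands: docker logs --tail 400 <id> for every discovered container, journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, kubectl describe nodes with the in-VM kubeconfig, and a crictl-or-docker fallback for overall container status. A condensed sketch of that dispatch, with the command strings copied verbatim from the log (the gather helper and the hard-coded container IDs are illustrative only; minikube runs these on the guest via ssh_runner):

// log_sources.go - a condensed sketch of one gathering pass. Commands are
// taken verbatim from the log above and run locally through bash here.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log source through bash -c, mirroring the
// `/bin/bash -c "..."` invocations in the log.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Per-container sources reuse the IDs found in the discovery phase.
	for _, id := range []string{"62704bf39963", "e35e0efca313"} {
		gather("storage-provisioner ["+id+"]", "docker logs --tail 400 "+id)
	}
	// Host-level sources, copied from the log lines above.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes"+
			" --kubeconfig=/var/lib/minikube/kubeconfig")
	// Container status prefers crictl and falls back to docker ps -a.
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
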
	I0731 12:29:40.566338    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:43.475526    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:43.475840    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:43.504249    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:43.504378    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:43.522520    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:43.522625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:43.537660    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:43.537735    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:43.549139    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:43.549201    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:43.564441    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:43.564514    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:43.575043    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:43.575114    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:43.585241    8672 logs.go:276] 0 containers: []
	W0731 12:29:43.585255    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:43.585315    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:43.596372    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:43.596391    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:43.596397    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:43.607608    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:43.607641    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:43.618710    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:43.618720    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:43.642856    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:43.642863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:43.660982    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:43.660993    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:43.672176    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:43.672186    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:43.707218    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:43.707232    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:43.725013    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:43.725026    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:43.738858    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:43.738870    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:43.750817    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:43.750830    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:43.788016    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:43.788025    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:43.798577    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:43.798587    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:43.813199    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:43.813210    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:43.828245    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:43.828259    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:43.868236    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:43.868245    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:43.872756    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:43.872762    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:43.889038    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:43.889048    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:45.566663    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:45.566876    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:45.592654    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:45.592766    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:45.608910    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:45.608999    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:45.625614    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:45.625685    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:45.636129    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:45.636199    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:45.646875    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:45.646977    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:45.657341    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:45.657413    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:45.668256    8683 logs.go:276] 0 containers: []
	W0731 12:29:45.668268    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:45.668328    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:45.678846    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:45.678864    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:45.678869    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:45.696571    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:45.696581    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:45.709313    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:45.709324    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:45.750659    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:45.750668    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:45.755641    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:45.755649    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:45.773727    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:45.773740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:45.784882    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:45.784893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:45.797063    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:45.797075    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:45.809465    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:45.809478    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:45.835913    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:45.835920    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:45.874641    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:45.874655    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:45.900609    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:45.900621    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:45.914855    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:45.914868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:45.926482    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:45.926505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:45.938311    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:45.938322    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:45.949544    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:45.949554    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:45.962031    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:45.962047    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:45.975607    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:45.975619    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:45.987617    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:45.987627    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:46.409650    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:48.514865    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:51.411930    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:51.412137    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:51.435581    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:51.435683    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:51.452111    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:51.452203    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:51.464636    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:51.464696    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:51.475551    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:51.475630    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:51.485928    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:51.486000    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:51.500043    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:51.500112    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:51.510644    8672 logs.go:276] 0 containers: []
	W0731 12:29:51.510654    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:51.510710    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:51.521583    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:51.521599    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:51.521604    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:51.538308    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:51.538322    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:51.565001    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:51.565015    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:51.600775    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:51.600789    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:51.616227    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:51.616240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:51.628503    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:51.628515    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:51.639646    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:51.639661    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:51.651367    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:51.651377    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:51.666096    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:51.666106    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:51.682818    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:51.682829    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:51.694147    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:51.694157    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:51.707668    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:51.707678    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:51.724929    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:51.724940    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:51.761390    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:51.761397    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:51.765307    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:51.765312    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:51.779024    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:51.779033    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:51.822176    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:51.822186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:54.336016    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:53.517481    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:53.517951    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:53.567277    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:29:53.567408    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:53.593729    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:29:53.593821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:53.606030    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:29:53.606107    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:53.622493    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:29:53.622577    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:53.633631    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:29:53.633701    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:53.644375    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:29:53.644450    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:53.655010    8683 logs.go:276] 0 containers: []
	W0731 12:29:53.655021    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:53.655078    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:53.666390    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:29:53.666406    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:53.666412    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:53.671244    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:29:53.671252    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:29:53.683979    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:53.683992    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:53.723566    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:29:53.723575    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:29:53.735533    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:53.735543    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:53.762285    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:29:53.762295    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:29:53.787858    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:29:53.787870    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:29:53.802588    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:29:53.802604    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:29:53.814410    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:29:53.814421    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:29:53.833453    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:29:53.833465    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:29:53.850361    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:29:53.850373    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:29:53.862178    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:29:53.862188    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:53.874381    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:29:53.874392    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:29:53.888215    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:29:53.888227    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:29:53.901963    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:29:53.901973    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:29:53.913538    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:29:53.913550    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:29:53.934001    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:29:53.934015    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:29:53.946302    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:29:53.946316    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:29:53.957708    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:53.957719    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:59.338229    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:59.338435    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:59.352368    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:59.352451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:59.363845    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:59.363918    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:59.374248    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:59.374313    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:59.384446    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:59.384521    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:59.394563    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:59.394627    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:59.404815    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:59.404883    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:59.415053    8672 logs.go:276] 0 containers: []
	W0731 12:29:59.415064    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:59.415116    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:59.425759    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:59.425774    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:59.425781    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:59.463723    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:59.463736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:59.502033    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:59.502043    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:59.514270    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:59.514280    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:59.531815    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:59.531827    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:59.535947    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:59.535955    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:59.550435    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:59.550444    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:59.564697    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:59.564709    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:59.581987    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:59.581998    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:59.618840    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:59.618850    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:59.633267    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:59.633281    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:59.651778    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:59.651791    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:59.664888    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:59.664898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:59.688316    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:59.688322    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:59.699966    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:59.699976    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:59.714965    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:59.714976    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:59.726866    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:59.726877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:56.498440    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:02.240172    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:01.500891    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:01.501054    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:01.518612    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:01.518704    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:01.533659    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:01.533728    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:01.544362    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:01.544431    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:01.555043    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:01.555112    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:01.573142    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:01.573209    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:01.583117    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:01.583190    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:01.594587    8683 logs.go:276] 0 containers: []
	W0731 12:30:01.594599    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:01.594655    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:01.605327    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:01.605343    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:01.605349    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:01.616810    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:01.616823    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:01.660486    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:01.660498    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:01.666248    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:01.666259    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:01.682167    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:01.682185    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:01.697737    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:01.697747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:01.715001    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:01.715014    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:01.726940    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:01.726950    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:01.744313    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:01.744323    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:01.756496    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:01.756507    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:01.768352    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:01.768363    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:01.779532    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:01.779542    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:01.813743    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:01.813753    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:01.839188    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:01.839199    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:01.853853    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:01.853863    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:01.872290    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:01.872302    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:01.897863    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:01.897873    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:01.912655    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:01.912665    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:01.924363    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:01.924373    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:04.438310    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:07.242351    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:07.242564    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:07.259938    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:07.260028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:07.273614    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:07.273679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:07.284410    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:07.284500    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:07.294375    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:07.294441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:07.304942    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:07.305008    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:07.316389    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:07.316451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:07.326632    8672 logs.go:276] 0 containers: []
	W0731 12:30:07.326641    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:07.326690    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:07.337402    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:07.337416    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:07.337421    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:07.363292    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:07.363303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:07.403702    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:07.403716    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:07.407891    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:07.407897    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:07.421652    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:07.421669    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:07.459462    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:07.459472    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:07.470860    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:07.470872    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:07.482813    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:07.482824    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:07.497243    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:07.497253    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:07.509297    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:07.509307    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:07.524439    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:07.524450    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:07.535725    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:07.535736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:07.550457    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:07.550466    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:07.586772    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:07.586782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:07.600703    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:07.600713    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:07.612656    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:07.612667    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:07.630197    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:07.630212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:10.144496    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:09.440551    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:09.440704    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:09.462902    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:09.462980    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:09.474889    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:09.474964    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:09.485271    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:09.485341    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:09.495790    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:09.495856    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:09.506251    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:09.506316    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:09.517066    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:09.517135    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:09.527888    8683 logs.go:276] 0 containers: []
	W0731 12:30:09.527899    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:09.527954    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:09.538761    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:09.538778    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:09.538785    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:09.578262    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:09.578272    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:09.592203    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:09.592215    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:09.603763    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:09.603774    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:09.615935    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:09.615945    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:09.627972    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:09.627982    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:09.639538    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:09.639555    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:09.665091    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:09.665097    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:09.669328    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:09.669337    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:09.683526    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:09.683536    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:09.708235    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:09.708246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:09.730113    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:09.730123    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:09.747847    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:09.747861    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:09.759693    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:09.759705    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:09.782355    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:09.782369    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:09.793644    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:09.793659    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:09.828760    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:09.828776    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:09.840245    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:09.840256    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:09.851845    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:09.851858    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
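Each cycle in the log above follows the same shape: for every control-plane component, enumerate its containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (logs.go:276 reports the count and IDs, or logs.go:278 warns when none match, as with "kindnet"), then tail each container with `docker logs --tail 400 <id>`. A minimal Go sketch of that pattern follows; the helper names are hypothetical, and the real implementation in minikube's logs.go runs these commands on the guest over SSH via ssh_runner.go rather than locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers is a hypothetical stand-in for the enumeration step seen
    // in the log: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors the gathering step: `docker logs --tail 400 <id>`.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			logs, _ := tailLogs(id)
    			_ = logs // in minikube these are folded into the failure report
    		}
    	}
    }

Note that two IDs per component (e.g. the two kube-apiserver containers) indicate a current container plus an exited predecessor, which is why each cycle tails both.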
	I0731 12:30:15.146390    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
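The `stopped:` lines are the other half of the loop: api_server.go:253 probes https://10.0.2.15:8443/healthz, and when no apiserver answers within the client timeout, api_server.go:269 records "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" before another round of log gathering begins. The following is a hedged sketch of such a probe; the 5-second timeout and the skipped TLS verification are assumptions chosen for illustration, not values taken from minikube's source.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Assumption: a short per-request timeout, roughly matching the
    		// ~5s retry cadence visible in the log timestamps.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The endpoint serves the cluster's self-signed certificate,
    			// so a bare client must skip verification to reach it.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// This is the path the log keeps hitting: a timeout surfaces as
    		// "context deadline exceeded (Client.Timeout exceeded while
    		// awaiting headers)".
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }

Because every probe in this section times out for both pids 8672 and 8683, the apiserver never comes up on either cluster, and the gathering cycles simply repeat until the outer wait expires.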
	I0731 12:30:15.146808    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:15.179007    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:15.179137    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:15.198576    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:15.198671    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:15.213511    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:15.213576    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:15.231749    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:15.231807    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:15.242522    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:15.242579    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:15.253599    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:15.253671    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:15.263621    8672 logs.go:276] 0 containers: []
	W0731 12:30:15.263631    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:15.263679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:15.273980    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:15.273996    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:15.274002    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:15.312278    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:15.312293    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:15.349805    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:15.349816    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:15.375198    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:15.375210    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:15.387020    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:15.387030    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:15.405068    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:15.405082    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:15.416459    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:15.416470    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:15.428073    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:15.428086    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:15.439565    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:15.439575    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:15.459823    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:15.459834    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:15.493832    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:15.493848    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:15.508579    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:15.508590    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:15.523001    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:15.523017    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:15.534278    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:15.534290    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:15.559142    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:15.559151    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:15.563236    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:15.563242    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:15.577698    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:15.577712    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:12.370042    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:18.094673    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:17.372421    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:17.372729    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:17.405521    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:17.405651    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:17.426033    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:17.426115    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:17.442559    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:17.442635    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:17.453921    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:17.453989    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:17.464686    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:17.464755    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:17.475652    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:17.475721    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:17.485973    8683 logs.go:276] 0 containers: []
	W0731 12:30:17.485983    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:17.486045    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:17.501529    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:17.501544    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:17.501550    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:17.515933    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:17.515947    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:17.528545    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:17.528558    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:17.541515    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:17.541528    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:17.566204    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:17.566214    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:17.601270    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:17.601283    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:17.612523    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:17.612533    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:17.623543    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:17.623554    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:17.641261    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:17.641272    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:17.666075    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:17.666086    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:17.685866    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:17.685876    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:17.697979    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:17.697990    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:17.709755    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:17.709766    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:17.751189    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:17.751207    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:17.758458    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:17.758469    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:17.772758    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:17.772769    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:17.784498    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:17.784509    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:17.801483    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:17.801493    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:17.813246    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:17.813259    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:20.328338    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:23.096993    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:23.097106    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:23.109281    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:23.109353    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:23.125701    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:23.125770    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:23.135987    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:23.136049    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:23.147048    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:23.147118    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:23.157578    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:23.157644    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:23.168063    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:23.168128    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:23.178257    8672 logs.go:276] 0 containers: []
	W0731 12:30:23.178269    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:23.178320    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:23.191866    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:23.191884    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:23.191891    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:23.228955    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:23.228966    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:23.247239    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:23.247250    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:23.258648    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:23.258657    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:23.274835    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:23.274845    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:23.285999    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:23.286010    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:23.290546    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:23.290556    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:23.306477    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:23.306487    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:23.321230    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:23.321240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:23.338698    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:23.338710    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:23.350426    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:23.350436    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:23.367946    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:23.367958    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:23.379408    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:23.379419    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:23.391198    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:23.391212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:23.402625    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:23.402633    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:23.439317    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:23.439329    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:23.477738    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:23.477748    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:26.002312    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:25.330703    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:25.331012    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:25.362240    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:25.362373    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:25.380040    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:25.380141    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:25.399544    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:25.399627    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:25.412441    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:25.412518    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:25.423303    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:25.423369    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:25.434346    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:25.434418    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:25.445202    8683 logs.go:276] 0 containers: []
	W0731 12:30:25.445215    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:25.445284    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:25.455849    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:25.455865    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:25.455870    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:25.469894    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:25.469908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:25.509665    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:25.509674    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:25.523213    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:25.523228    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:25.527651    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:25.527657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:25.552276    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:25.552289    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:25.565547    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:25.565557    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:25.582566    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:25.582576    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:25.594205    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:25.594216    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:25.605704    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:25.605715    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:25.617985    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:25.617994    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:25.629361    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:25.629373    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:25.671519    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:25.671529    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:25.685989    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:25.685999    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:25.697442    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:25.697455    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:25.709554    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:25.709564    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:25.721654    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:25.721664    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:25.739756    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:25.739768    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:25.751548    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:25.751556    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:31.004604    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:31.004803    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:31.023884    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:31.023980    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:31.038160    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:31.038239    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:31.050499    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:31.050571    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:31.061384    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:31.061453    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:31.072265    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:31.072337    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:31.082643    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:31.082715    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:31.092921    8672 logs.go:276] 0 containers: []
	W0731 12:30:31.092933    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:31.092994    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:31.103625    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:31.103642    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:31.103649    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:31.145554    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:31.145566    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:31.159893    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:31.159905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:31.174699    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:31.174711    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:28.279930    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:31.186419    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:31.186430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:31.198063    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:31.198074    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:31.236177    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:31.236186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:31.251933    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:31.251944    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:31.269051    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:31.269061    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:31.280134    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:31.280144    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:31.317216    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:31.317226    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:31.321730    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:31.321737    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:31.332848    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:31.332860    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:31.344908    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:31.344921    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:31.370581    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:31.370591    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:31.385016    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:31.385031    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:31.403663    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:31.403675    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
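One detail worth noting in the recurring "container status" step is its shell-level fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers the CRI-generic crictl but degrades to plain `docker ps -a` when crictl is missing or fails. A minimal Go rendering of the same try-then-fall-back idea is below; the helper name is hypothetical, since minikube simply runs the bash one-liner over SSH.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runFirstAvailable mirrors the fallback in the "container status" step:
    // try `crictl ps -a`, and if that fails for any reason (missing binary,
    // mismatched runtime), fall back to `docker ps -a`.
    func runFirstAvailable() (string, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return string(out), nil
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	status, err := runFirstAvailable()
    	if err != nil {
    		fmt.Println("could not list containers:", err)
    		return
    	}
    	fmt.Print(status)
    }

On this Docker-runtime cluster the second branch is the one that produces the container-status output gathered in each cycle.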
	I0731 12:30:33.917727    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:33.282310    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:33.282454    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:33.301282    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:33.301353    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:33.312385    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:33.312458    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:33.322580    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:33.322653    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:33.336877    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:33.336943    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:33.350479    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:33.350538    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:33.363300    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:33.363367    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:33.373860    8683 logs.go:276] 0 containers: []
	W0731 12:30:33.373872    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:33.373932    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:33.384875    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:33.384888    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:33.384893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:33.409677    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:33.409689    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:33.422362    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:33.422376    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:33.438808    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:33.438818    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:33.450083    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:33.450092    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:33.461001    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:33.461012    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:33.486514    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:33.486522    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:33.527990    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:33.528007    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:33.542434    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:33.542446    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:33.556839    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:33.556852    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:33.576593    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:33.576604    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:33.612273    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:33.612286    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:33.629113    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:33.629124    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:33.640757    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:33.640767    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:33.645017    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:33.645024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:33.658741    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:33.658752    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:33.675255    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:33.675267    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:33.687437    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:33.687450    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:33.706006    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:33.706017    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:36.220854    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:38.919860    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:38.920044    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:38.940722    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:38.940811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:38.953461    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:38.953535    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:38.964615    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:38.964687    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:38.975919    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:38.975992    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:38.986261    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:38.986330    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:38.996590    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:38.996658    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:39.008116    8672 logs.go:276] 0 containers: []
	W0731 12:30:39.008127    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:39.008181    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:39.018506    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:39.018525    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:39.018531    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:39.033475    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:39.033487    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:39.051351    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:39.051360    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:39.062906    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:39.062918    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:39.086480    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:39.086488    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:39.097842    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:39.097851    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:39.111289    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:39.111300    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:39.148427    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:39.148436    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:39.159986    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:39.159997    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:39.171703    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:39.171716    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:39.175960    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:39.175967    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:39.212416    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:39.212431    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:39.227254    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:39.227264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:39.238803    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:39.238812    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:39.277005    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:39.277015    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:39.294832    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:39.294844    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:39.309456    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:39.309467    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:41.223063    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:41.223234    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:41.241186    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:41.241273    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:41.254319    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:41.254396    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:41.265507    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:41.265576    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:41.276668    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:41.276736    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:41.287723    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:41.287789    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:41.298740    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:41.298805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:41.309019    8683 logs.go:276] 0 containers: []
	W0731 12:30:41.309031    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:41.309096    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:41.318900    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:41.318917    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:41.318923    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:41.330528    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:41.330538    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:41.345228    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:41.345242    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:41.825063    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:41.369533    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:41.369543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:41.387653    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:41.387667    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:41.399614    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:41.399625    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:41.411092    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:41.411106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:41.422897    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:41.422908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:41.459915    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:41.459926    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:41.474226    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:41.474236    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:41.486469    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:41.486479    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:41.500460    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:41.500472    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:41.515478    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:41.515490    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:41.540876    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:41.540893    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:41.575919    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:41.575934    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:41.588100    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:41.588114    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:41.629561    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:41.629569    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:41.633871    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:41.633877    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:41.651466    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:41.651481    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:44.165932    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:46.827336    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:46.827537    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:46.845587    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:46.845682    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:46.860025    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:46.860099    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:46.871308    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:46.871383    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:46.881549    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:46.881618    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:46.891614    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:46.891679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:46.902009    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:46.902079    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:46.911788    8672 logs.go:276] 0 containers: []
	W0731 12:30:46.911797    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:46.911852    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:46.922290    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:46.922309    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:46.922315    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:46.936681    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:46.936692    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:46.973389    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:46.973400    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:46.988277    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:46.988287    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:47.005843    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:47.005853    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:47.020122    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:47.020135    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:47.032162    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:47.032172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:47.043994    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:47.044005    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:47.048022    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:47.048029    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:47.059950    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:47.059965    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:47.074324    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:47.074334    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:47.088725    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:47.088736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:47.100058    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:47.100069    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:47.124226    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:47.124233    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:47.158469    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:47.158481    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:47.174743    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:47.174753    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:47.191052    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:47.191062    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:49.732684    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:49.168330    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:49.168735    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:49.202691    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:49.202828    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:49.221604    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:49.221708    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:49.237247    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:49.237327    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:49.249752    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:49.249829    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:49.260900    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:49.260961    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:49.271477    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:49.271541    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:49.282143    8683 logs.go:276] 0 containers: []
	W0731 12:30:49.282156    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:49.282219    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:49.292606    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:49.292621    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:49.292626    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:49.328229    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:49.328240    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:49.343858    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:49.343871    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:49.355215    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:49.355227    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:49.373058    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:49.373072    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:49.385861    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:49.385874    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:49.398478    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:49.398490    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:49.416001    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:49.416013    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:49.427543    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:49.427557    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:49.432140    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:49.432146    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:49.457260    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:49.457270    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:49.471204    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:49.471214    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:49.490010    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:49.490020    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:49.501795    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:49.501806    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:30:49.512980    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:49.512992    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:49.525939    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:49.525950    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:49.537125    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:49.537139    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:49.562565    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:49.562575    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:49.604055    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:49.604065    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
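
The block above is one full diagnostic pass of minikube's apiserver wait loop: each healthz probe to https://10.0.2.15:8443 times out, and before retrying, the process (8683 here) re-enumerates the control-plane containers and tails their logs. A minimal, self-contained Go sketch of such a poll loop follows; the helper name and timing constants are illustrative assumptions, not minikube's actual implementation.

    // Sketch of a healthz retry loop like the one implied by the
    // api_server.go lines above. All names and constants are illustrative.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request cap, like the Client.Timeout in the log
            Transport: &http.Transport{
                // The guest apiserver serves a self-signed cert, so a probe skips verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports healthy
                }
            }
            // On each failure, minikube gathers diagnostics (as above) before retrying.
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 3*time.Second, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
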
	I0731 12:30:54.734934    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:54.735163    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:54.752617    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:54.752708    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:54.765997    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:54.766063    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:54.777279    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:54.777346    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:54.787894    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:54.787966    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:54.798594    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:54.798657    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:54.809591    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:54.809662    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:54.828133    8672 logs.go:276] 0 containers: []
	W0731 12:30:54.828143    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:54.828204    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:54.838067    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:54.838083    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:54.838089    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:54.875571    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:54.875584    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:54.909575    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:54.909586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:54.921393    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:54.921405    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:54.932737    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:54.932748    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:54.943888    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:54.943900    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:54.967624    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:54.967631    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:54.979503    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:54.979519    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:54.993814    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:54.993824    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:55.011947    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:55.011959    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:55.025804    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:55.025814    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:55.037825    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:55.037836    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:55.052572    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:55.052581    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:55.067846    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:55.067857    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:55.072231    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:55.072237    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:55.109060    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:55.109073    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:55.129724    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:55.129736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
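
Two minikube processes (PIDs 8672 and 8683, one per concurrently running test) write to this log, which is why timestamps jump backwards at the seams between their cycles. Each cycle begins by listing container IDs per control-plane component; two IDs typically mean a restarted container plus an earlier exited one, and zero IDs for kindnet is expected when the cluster does not use the kindnet CNI. An illustrative Go sketch of that enumeration (hypothetical helper; the real command runs over SSH inside the guest):

    // Reproduces the per-component "docker ps -a --filter=name=k8s_<name>" calls above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one short ID per line, e.g. [05bc08f9a6a8 d36958118793]
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := listComponentContainers(c)
            fmt.Println(c, ids, err)
        }
    }
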
	I0731 12:30:52.121680    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:57.643856    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:57.123985    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:57.124292    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:57.160649    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:30:57.160782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:57.181486    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:30:57.181572    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:57.196331    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:30:57.196417    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:57.208273    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:30:57.208348    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:57.219383    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:30:57.219455    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:57.231620    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:30:57.231693    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:57.245188    8683 logs.go:276] 0 containers: []
	W0731 12:30:57.245199    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:57.245261    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:57.256019    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:30:57.256034    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:30:57.256040    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:30:57.268197    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:30:57.268207    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:30:57.279752    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:30:57.279764    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:30:57.292896    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:30:57.292907    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:57.305087    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:57.305097    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:57.343489    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:30:57.343501    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:30:57.379889    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:57.379909    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:57.409448    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:57.409460    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:57.448798    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:30:57.448808    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:30:57.466664    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:30:57.466675    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:30:57.481310    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:30:57.481322    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:30:57.495996    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:30:57.496018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:30:57.507689    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:30:57.507701    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:30:57.519084    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:30:57.519095    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:30:57.531401    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:57.531413    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:57.536082    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:30:57.536088    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:30:57.561644    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:30:57.561657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:30:57.581300    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:30:57.581313    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:30:57.592740    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:30:57.592751    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
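
Each "Gathering logs for <component> [<id>] ..." pair above maps to a single shell invocation of docker logs, capped at the last 400 lines. A minimal sketch, assuming a hypothetical tailContainerLogs helper run locally rather than through minikube's ssh_runner:

    // Sketch: how each log-gathering step maps to a shell command.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func tailContainerLogs(id string, lines int) (string, error) {
        cmd := fmt.Sprintf("docker logs --tail %d %s", lines, id)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLogs("6915e8ffd332", 400) // coredns container from the log above
        fmt.Println(out, err)
    }
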
	I0731 12:31:00.106303    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:02.646041    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:02.646261    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:02.671412    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:02.671531    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:02.688275    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:02.688359    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:02.702130    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:02.702191    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:02.713397    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:02.713470    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:02.727677    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:02.727743    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:02.741717    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:02.741783    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:02.755591    8672 logs.go:276] 0 containers: []
	W0731 12:31:02.755605    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:02.755664    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:02.766225    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:02.766240    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:02.766245    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:02.800231    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:02.800242    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:02.816315    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:02.816327    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:02.830396    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:02.830422    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:02.844504    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:02.844516    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:02.859192    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:02.859205    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:02.871392    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:02.871403    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:02.883289    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:02.883301    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:02.921295    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:02.921304    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:02.932428    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:02.932438    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:02.944514    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:02.944523    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:02.960979    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:02.960993    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:02.972565    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:02.972575    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:03.010865    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:03.010872    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:03.015233    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:03.015239    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:03.032597    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:03.032607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:03.044205    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:03.044219    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:05.569099    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:05.108931    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:05.109427    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:05.147908    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:05.148048    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:05.168242    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:05.168349    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:05.183900    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:05.183988    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:05.196509    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:05.196587    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:05.208384    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:05.208458    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:05.219377    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:05.219446    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:05.230020    8683 logs.go:276] 0 containers: []
	W0731 12:31:05.230031    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:05.230090    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:05.241313    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:05.241329    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:05.241333    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:05.253062    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:05.253076    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:05.277516    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:05.277527    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:05.318990    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:05.319000    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:05.355536    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:05.355547    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:05.372747    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:05.372760    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:05.384579    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:05.384592    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:05.395932    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:05.395944    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:05.407820    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:05.407830    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:05.426809    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:05.426818    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:05.439879    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:05.439891    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:05.451207    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:05.451217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:05.463324    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:05.463336    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:05.474997    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:05.475007    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:05.479571    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:05.479580    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:05.493854    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:05.493864    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:05.507741    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:05.507751    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:05.522047    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:05.522059    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:05.547798    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:05.547810    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:10.571202    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:10.571322    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:10.587425    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:10.587503    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:10.597940    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:10.598011    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:10.608810    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:10.608882    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:10.619104    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:10.619178    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:10.629691    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:10.629756    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:10.639890    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:10.639960    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:10.651087    8672 logs.go:276] 0 containers: []
	W0731 12:31:10.651098    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:10.651159    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:10.661447    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:10.661469    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:10.661478    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:10.701140    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:10.701151    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:10.737008    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:10.737024    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:10.748945    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:10.748957    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:10.767998    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:10.768010    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:10.782994    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:10.783004    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:10.801620    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:10.801631    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:10.824470    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:10.824478    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:10.835831    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:10.835840    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:10.847266    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:10.847279    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:10.859417    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:10.859428    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:10.896002    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:10.896012    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:10.920591    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:10.920604    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:10.924855    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:10.924864    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:10.938785    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:10.938797    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:10.979695    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:10.979706    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:10.994159    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:10.994168    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:08.063797    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:13.506830    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:13.065001    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:13.065409    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:13.112019    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:13.112154    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:13.132886    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:13.133006    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:13.152599    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:13.152678    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:13.164418    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:13.164497    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:13.177309    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:13.177393    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:13.188297    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:13.188373    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:13.199202    8683 logs.go:276] 0 containers: []
	W0731 12:31:13.199214    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:13.199270    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:13.210260    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:13.210274    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:13.210279    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:13.224884    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:13.224894    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:13.240235    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:13.240246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:13.251815    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:13.251827    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:13.270860    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:13.270872    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:13.296926    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:13.296937    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:13.308973    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:13.308984    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:13.321109    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:13.321119    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:13.333227    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:13.333239    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:13.345156    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:13.345166    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:13.356627    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:13.356637    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:13.376663    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:13.376673    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:13.419057    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:13.419064    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:13.423640    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:13.423649    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:13.460083    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:13.460093    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:13.484047    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:13.484057    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:13.502349    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:13.502359    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:13.520737    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:13.520750    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:13.546687    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:13.546702    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
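
The "container status" step is a shell fallback chain: the backticks substitute crictl's full path when it is installed (or the bare word crictl, which then fails), and the trailing || falls back to docker ps -a, so the step works with either runtime CLI. A sketch of the same fallback in Go (illustrative only):

    // Sketch of the "container status" fallback seen above: prefer crictl
    // when present, otherwise fall back to docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // `which crictl || echo crictl` resolves crictl's path when installed;
        // if that invocation fails, the outer || falls back to docker ps -a.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }
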
	I0731 12:31:16.063920    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:18.509029    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:18.509230    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:18.525906    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:18.525988    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:18.537423    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:18.537493    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:18.547846    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:18.547919    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:18.558299    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:18.558371    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:18.568643    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:18.568720    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:18.579108    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:18.579174    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:18.589503    8672 logs.go:276] 0 containers: []
	W0731 12:31:18.589513    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:18.589570    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:18.599861    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:18.599877    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:18.599883    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:18.613807    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:18.613819    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:18.650620    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:18.650635    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:18.662336    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:18.662346    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:18.676767    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:18.676780    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:18.701240    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:18.701246    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:18.713888    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:18.713898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:18.718174    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:18.718185    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:18.732539    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:18.732550    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:18.744143    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:18.744154    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:18.759457    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:18.759466    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:18.771609    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:18.771619    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:18.808294    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:18.808302    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:18.844069    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:18.844083    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:18.862597    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:18.862608    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:18.874491    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:18.874501    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:18.891487    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:18.891496    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:21.064724    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:21.064889    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:21.077872    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:21.077949    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:21.089735    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:21.089805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:21.100028    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:21.100100    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:21.110764    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:21.110836    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:21.121455    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:21.121527    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:21.132579    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:21.132649    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:21.142791    8683 logs.go:276] 0 containers: []
	W0731 12:31:21.142806    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:21.142864    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:21.153716    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:21.153732    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:21.153737    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:21.158738    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:21.158746    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:21.194266    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:21.194278    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:21.219485    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:21.219496    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:21.231893    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:21.231904    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:21.244793    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:21.244807    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:21.262497    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:21.262507    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:21.280149    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:21.280160    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:21.321108    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:21.321115    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:21.347117    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:21.347128    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:21.404653    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:21.361927    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:21.361938    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:21.373283    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:21.373295    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:21.384489    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:21.384498    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:21.395344    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:21.395354    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:21.414032    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:21.414041    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:21.432271    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:21.432281    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:21.450015    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:21.450026    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:21.461630    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:21.461641    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:21.473312    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:21.473323    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
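
Beyond per-container logs, each cycle also collects host-level sources from inside the guest: the kubelet unit, the docker and cri-docker units, and the kernel ring buffer filtered to warning severity and above. A sketch of those three commands with a hypothetical wrapper (minikube runs them over SSH):

    // Sketch of the host-level log sources gathered each cycle.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(script string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        sources := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            // -P: no pager, -H: human-readable timestamps, -L=never: no color,
            // --level: warnings and worse only.
            "dmesg": "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, script := range sources {
            out, err := run(script)
            fmt.Println(name, err, len(out), "bytes")
        }
    }
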
	I0731 12:31:24.000639    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:26.406823    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:26.406969    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:26.421590    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:26.421669    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:26.435839    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:26.435913    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:26.446279    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:26.446350    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:26.456642    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:26.456716    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:26.467177    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:26.467243    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:26.477393    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:26.477461    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:26.487674    8672 logs.go:276] 0 containers: []
	W0731 12:31:26.487686    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:26.487740    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:26.497901    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:26.497918    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:26.497926    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:26.512247    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:26.512257    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:26.525417    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:26.525429    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:26.560081    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:26.560098    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:26.574621    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:26.574634    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:26.613582    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:26.613591    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:26.628435    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:26.628446    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:26.652285    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:26.652295    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:26.664599    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:26.664610    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:26.704058    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:26.704065    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:26.707910    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:26.707916    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:26.728106    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:26.728118    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:26.742782    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:26.742798    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:26.754472    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:26.754482    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:26.773471    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:26.773481    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:26.785403    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:26.785419    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:26.798881    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:26.798891    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
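
The "describe nodes" step bypasses any host kubectl: it invokes the kubectl binary minikube staged in the guest under /var/lib/minikube/binaries/<version>/, pinned to the cluster's Kubernetes version (v1.24.1 here) and pointed at the guest kubeconfig. A minimal sketch with the version passed as a parameter:

    // Sketch of the "describe nodes" diagnostic step using the staged kubectl.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func describeNodes(k8sVersion string) (string, error) {
        script := fmt.Sprintf(
            "sudo /var/lib/minikube/binaries/%s/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig", k8sVersion)
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes("v1.24.1")
        fmt.Println(out, err)
    }
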
	I0731 12:31:29.312236    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:29.003085    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:29.003540    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:29.043520    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:29.043665    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:29.067009    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:29.067122    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:29.084097    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:29.084175    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:29.096233    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:29.096309    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:29.108830    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:29.108905    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:29.119765    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:29.119832    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:29.130602    8683 logs.go:276] 0 containers: []
	W0731 12:31:29.130612    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:29.130670    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:29.141952    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:29.141966    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:29.141971    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:29.183892    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:29.183899    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:29.187857    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:29.187864    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:29.199328    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:29.199341    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:29.212045    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:29.212057    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:29.230964    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:29.230974    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:29.242859    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:29.242871    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:29.257086    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:29.257100    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:29.268885    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:29.268896    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:29.280356    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:29.280367    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:29.297357    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:29.297368    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:29.309402    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:29.309414    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:29.322587    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:29.322599    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:29.346126    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:29.346134    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:29.382007    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:29.382018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:29.396715    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:29.396725    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:29.420918    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:29.420929    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:29.434994    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:29.435005    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:29.452549    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:29.452560    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
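
The interleaved PIDs above (8672 and 8683) are two concurrent minikube processes, each polling the same guest address, which is why timestamps occasionally step backwards between adjacent lines. Each "Checking apiserver healthz" probe gives up roughly five seconds after it is issued (12:31:37.210 to 12:31:42.212 for PID 8672 further down, for instance), and every timeout triggers the log-gathering pass seen throughout this section. A minimal sketch of such a probe, assuming a 5-second client timeout and skipped certificate verification; both details are inferred from the log, not taken from minikube's source:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver health endpoint.
    // The 5s timeout is inferred from the ~5s gap between "Checking" and
    // "stopped" lines in this log; the real client may differ.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: the test cluster serves a self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // On timeout this yields the "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)" error above.
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if string(body) != "ok" {
            return fmt.Errorf("unhealthy: %s", body)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
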
	I0731 12:31:34.314412    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:34.314563    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:34.326387    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:34.326472    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:34.338572    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:34.338650    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:34.348782    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:34.348856    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:34.359412    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:34.359492    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:34.370125    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:34.370197    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:34.383366    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:34.383441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:34.393038    8672 logs.go:276] 0 containers: []
	W0731 12:31:34.393050    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:34.393110    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:34.403722    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:34.403739    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:34.403744    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:34.443993    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:34.444015    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:34.459570    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:34.459583    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:34.471969    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:34.471983    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:34.483314    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:34.483324    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:34.521633    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:34.521644    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:34.557325    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:34.557337    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:34.569011    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:34.569022    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:34.583128    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:34.583145    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:34.600864    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:34.600873    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:34.612493    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:34.612504    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:34.627141    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:34.627152    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:34.642047    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:34.642056    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:34.646557    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:34.646563    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:34.660584    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:34.660595    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:34.673018    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:34.673028    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:34.685249    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:34.685259    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
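
Every gather pass starts by enumerating containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. Kubelet-managed containers are named with a k8s_ prefix, and because -a includes exited containers, most components report two IDs: the live instance plus a restarted predecessor. A sketch of that discovery step, run locally for illustration (the test actually executes it inside the guest over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists running and exited containers whose name matches
    // the kubelet convention k8s_<component>, newest first.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
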
	I0731 12:31:31.966117    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:37.210590    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:36.968412    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:36.968652    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:36.993202    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:36.993344    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:37.012124    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:37.012199    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:37.024392    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:37.024467    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:37.035581    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:37.035655    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:37.046182    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:37.046252    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:37.056806    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:37.056871    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:37.072536    8683 logs.go:276] 0 containers: []
	W0731 12:31:37.072547    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:37.072602    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:37.082792    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:37.082809    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:37.082814    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:37.096500    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:37.096510    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:37.113881    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:37.113892    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:37.125789    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:37.125800    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:37.140256    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:37.140265    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:37.151073    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:37.151082    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:37.167781    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:37.167793    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:37.181193    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:37.181205    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:37.218283    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:37.218293    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:37.230273    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:37.230283    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:37.254964    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:37.254971    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:37.278952    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:37.278962    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:37.290929    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:37.290941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:37.302717    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:37.302728    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:37.316006    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:37.316020    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:37.355139    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:37.355146    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:37.359301    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:37.359306    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:37.373093    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:37.373102    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:37.387509    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:37.387522    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:39.908750    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:42.212759    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:42.212957    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:42.233100    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:42.233184    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:42.245360    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:42.245441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:42.260626    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:42.260700    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:42.272089    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:42.272167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:42.282355    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:42.282427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:42.293177    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:42.293246    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:42.304209    8672 logs.go:276] 0 containers: []
	W0731 12:31:42.304220    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:42.304280    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:42.314384    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:42.314400    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:42.314405    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:42.325374    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:42.325386    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:42.365587    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:42.365599    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:42.401209    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:42.401221    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:42.416782    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:42.416792    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:42.433906    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:42.433917    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:42.451225    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:42.451234    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:42.455263    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:42.455270    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:42.492864    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:42.492874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:42.506779    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:42.506790    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:42.518485    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:42.518496    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:42.530818    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:42.530828    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:42.542654    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:42.542670    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:42.553924    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:42.553935    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:42.576928    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:42.576938    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:42.588404    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:42.588416    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:42.604712    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:42.604723    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
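
The "container status" step runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. If crictl is installed, the command substitution expands to its full path; if not, echo substitutes the bare word crictl, the first command fails to resolve, and the || falls through to plain docker ps -a. The same try-then-fall-back shape in Go (a sketch; sudo is omitted here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to the Docker CLI,
    // mirroring the shell fallback idiom quoted above.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(string(out))
    }
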
	I0731 12:31:45.124796    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:44.911328    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:44.911562    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:44.943489    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:44.943616    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:44.958807    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:44.958906    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:44.971180    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:44.971258    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:44.981997    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:44.982068    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:44.992521    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:44.992606    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:45.003238    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:45.003321    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:45.013746    8683 logs.go:276] 0 containers: []
	W0731 12:31:45.013759    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:45.013821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:45.025115    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:45.025131    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:45.025136    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:45.037924    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:45.037940    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:45.052117    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:45.052132    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:45.066803    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:45.066817    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:45.078529    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:45.078543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:45.096814    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:45.096825    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:45.108471    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:45.108485    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:45.113444    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:45.113451    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:45.124815    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:45.124823    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:45.142133    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:45.142147    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:45.153766    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:45.153776    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:45.194338    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:45.194346    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:45.218660    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:45.218671    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:45.242338    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:45.242348    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:45.253787    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:45.253801    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:45.290267    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:45.290277    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:45.304680    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:45.304690    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:45.316930    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:45.316941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:45.335351    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:45.335361    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:50.126967    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:50.127121    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:50.156250    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:50.156344    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:50.171665    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:50.171736    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:50.182660    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:50.182724    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:50.192941    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:50.193011    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:50.203709    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:50.203776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:50.214658    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:50.214723    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:50.225314    8672 logs.go:276] 0 containers: []
	W0731 12:31:50.225326    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:50.225386    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:50.236095    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:50.236113    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:50.236120    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:50.247811    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:50.247822    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:50.271211    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:50.271218    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:50.310135    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:50.310146    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:50.314229    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:50.314237    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:50.353512    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:50.353523    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:50.368581    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:50.368592    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:50.381301    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:50.381316    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:50.393073    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:50.393083    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:50.428890    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:50.428905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:50.443276    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:50.443285    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:50.454150    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:50.454162    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:50.472861    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:50.472873    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:50.486597    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:50.486607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:50.500887    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:50.500898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:50.512711    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:50.512724    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:50.530012    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:50.530027    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:47.849230    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:53.048240    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:52.851652    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:52.852021    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:52.883359    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:31:52.883489    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:52.900739    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:31:52.900835    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:52.914777    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:31:52.914859    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:52.931224    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:31:52.931298    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:52.941249    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:31:52.941320    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:52.952297    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:31:52.952369    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:52.962869    8683 logs.go:276] 0 containers: []
	W0731 12:31:52.962879    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:52.962937    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:52.974100    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:31:52.974115    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:31:52.974120    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:31:52.993013    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:31:52.993023    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:31:53.004676    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:31:53.004689    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:31:53.016725    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:31:53.016734    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:31:53.032980    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:31:53.032991    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:31:53.052074    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:31:53.052082    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:31:53.063638    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:31:53.063655    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:31:53.075630    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:31:53.075640    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:31:53.086769    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:53.086779    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:53.111308    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:31:53.111316    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:53.123990    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:31:53.124002    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:31:53.148886    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:31:53.148896    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:31:53.160135    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:31:53.160148    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:31:53.177662    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:31:53.177675    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:31:53.191696    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:31:53.191707    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:31:53.205433    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:31:53.205445    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:31:53.216992    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:53.217004    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:53.256982    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:53.256992    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:53.261830    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:53.261837    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
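
Besides per-container tails, each pass pulls a fixed set of host-level sources: the kubelet and docker/cri-docker units via journalctl, kernel messages via dmesg (-P no pager, -H human-readable, -L=never no color, --level restricting output to warnings and worse), and a kubectl describe nodes run with the version-pinned binary. The command strings below are copied verbatim from this log; grouping them in a table is only a sketch of the structure, not minikube's actual logs.go layout:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Host-level log sources gathered after every failed health check.
    // Command strings are taken verbatim from the test log.
    var sources = map[string]string{
        "kubelet": "sudo journalctl -u kubelet -n 400",
        "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
        "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig",
    }

    func main() {
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println("  error:", err)
                continue
            }
            fmt.Printf("  collected %d bytes\n", len(out))
        }
    }
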
	I0731 12:31:55.797990    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:58.050363    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:58.050563    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:58.071690    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:58.071792    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:58.086355    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:58.086434    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:58.097910    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:58.097979    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:58.109552    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:58.109617    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:58.120066    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:58.120123    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:58.136295    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:58.136402    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:58.148389    8672 logs.go:276] 0 containers: []
	W0731 12:31:58.148399    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:58.148456    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:58.159291    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:58.159307    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:58.159314    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:58.177160    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:58.177171    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:58.188199    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:58.188212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:58.212141    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:58.212158    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:58.257230    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:58.257241    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:58.261653    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:58.261659    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:58.298241    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:58.298252    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:58.313097    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:58.313110    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:58.324810    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:58.324822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:58.348185    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:58.348195    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:58.388040    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:58.388054    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:58.399802    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:58.399820    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:58.411027    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:58.411037    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:58.422641    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:58.422656    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:58.439912    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:58.439926    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:58.454884    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:58.454893    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:58.466303    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:58.466317    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:00.980197    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:00.799323    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:00.799504    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:00.812379    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:00.812461    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:00.826344    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:00.826413    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:00.837077    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:00.837151    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:00.848019    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:00.848094    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:00.859167    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:00.859236    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:00.869911    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:00.870005    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:00.880244    8683 logs.go:276] 0 containers: []
	W0731 12:32:00.880255    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:00.880316    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:00.890778    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:00.890800    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:00.890804    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:00.904490    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:00.904501    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:00.916111    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:00.916122    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:00.927301    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:00.927311    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:00.945271    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:00.945281    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:00.956791    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:00.956802    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:00.979894    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:00.979900    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:01.019553    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:01.019563    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:01.080016    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:01.080026    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:01.105078    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:01.105088    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:01.122588    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:01.122603    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:01.134698    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:01.134709    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:01.155362    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:01.155372    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:01.169914    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:01.169926    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:01.183023    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:01.183034    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:01.187345    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:01.187356    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:01.199786    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:01.199796    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:01.211767    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:01.211781    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:01.223694    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:01.223707    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
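
Each ID discovered above, running or exited, is then tailed for its last 400 lines with docker logs --tail 400 <id>. When a component lists two containers, the second is typically an earlier instance that exited and was replaced, so its tail is often where the underlying failure is recorded. A sketch of that per-container step, using the two kube-apiserver IDs from this run as example input:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer fetches the last 400 log lines of one container,
    // mirroring the "docker logs --tail 400 <id>" calls in the gather pass.
    func tailContainer(id string) ([]byte, error) {
        return exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    }

    func main() {
        // kube-apiserver IDs as reported by PID 8683 above; docker ps -a
        // lists newest first, so the second ID is the older instance.
        for _, id := range []string{"0eae5f71990f", "79af8db7b93f"} {
            out, err := tailContainer(id)
            if err != nil {
                fmt.Println(id, "error:", err)
                continue
            }
            fmt.Printf("%s: %d bytes of logs\n", id, len(out))
        }
    }
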
	I0731 12:32:05.982388    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:05.982764    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:06.023487    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:06.023625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:06.044816    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:06.044915    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:06.060102    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:06.060187    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:06.079979    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:06.080055    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:06.090378    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:06.090441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:06.101202    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:06.101277    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:06.111603    8672 logs.go:276] 0 containers: []
	W0731 12:32:06.111613    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:06.111669    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:06.122304    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:06.122321    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:06.122328    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:06.134798    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:06.134810    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:06.151035    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:06.151046    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:06.170995    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:06.171005    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:03.738131    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:06.185810    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:06.185822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:06.197386    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:06.197396    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:06.234267    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:06.234277    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:06.253707    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:06.253718    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:06.290021    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:06.290030    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:06.302012    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:06.302024    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:06.319231    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:06.319245    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:06.330218    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:06.330229    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:06.352250    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:06.352262    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:06.387588    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:06.387599    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:06.406407    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:06.406418    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:06.418534    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:06.418545    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:06.424568    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:06.424580    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:08.938380    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:08.738738    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:08.738928    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:08.754029    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:08.754117    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:08.766027    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:08.766101    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:08.780098    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:08.780164    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:08.790512    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:08.790583    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:08.801100    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:08.801170    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:08.811623    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:08.811701    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:08.821893    8683 logs.go:276] 0 containers: []
	W0731 12:32:08.821905    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:08.821962    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:08.836506    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:08.836521    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:08.836525    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:08.854274    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:08.854288    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:08.858820    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:08.858829    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:08.882545    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:08.882556    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:08.897543    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:08.897553    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:08.915799    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:08.915809    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:08.927802    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:08.927812    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:08.941357    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:08.941365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:08.956597    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:08.956607    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:08.978247    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:08.978257    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:09.017219    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:09.017230    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:09.030916    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:09.030926    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:09.054576    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:09.054585    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:09.066280    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:09.066290    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:09.081242    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:09.081254    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:09.092662    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:09.092675    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:09.104928    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:09.104941    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:09.139876    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:09.139886    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:09.151250    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:09.151261    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
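The run above is one complete diagnostic pass: after a healthz probe against https://10.0.2.15:8443 times out, minikube enumerates the containers for each control-plane component and tails the last 400 lines of each. A rough shell equivalent of one pass, reconstructed from the commands logged above (a sketch for orientation only, not minikube source; the component list is the one probed in this log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${name} --format={{.ID}})
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${name}\""   # matches the W-level line above
        continue
      fi
      for id in $ids; do
        docker logs --tail 400 "$id"                          # one log section per container
      done
    done

The same pass repeats below each time the healthz check reports "stopped", which is why the section re-gathers the same container IDs every few seconds.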
	I0731 12:32:13.940579    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:13.940862    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:13.968152    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:13.968289    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:13.985926    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:13.986018    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:13.999623    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:13.999694    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:14.011612    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:14.011683    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:14.022150    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:14.022214    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:14.032479    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:14.032557    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:14.042805    8672 logs.go:276] 0 containers: []
	W0731 12:32:14.042818    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:14.042875    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:14.053094    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:14.053109    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:14.053114    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:14.067554    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:14.067564    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:14.072172    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:14.072179    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:14.106860    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:14.106874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:14.124299    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:14.124309    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:14.135779    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:14.135791    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:14.159489    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:14.159498    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:14.199076    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:14.199084    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:14.214172    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:14.214183    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:14.225341    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:14.225351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:14.239001    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:14.239011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:14.253583    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:14.253593    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:14.264331    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:14.264343    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:14.276448    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:14.276457    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:14.304110    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:14.304125    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:14.329750    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:14.329767    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:14.345634    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:14.345649    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:11.664995    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:16.887394    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:16.667373    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:16.667824    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:16.699731    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:16.699866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:16.718403    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:16.718502    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:16.732646    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:16.732729    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:16.744123    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:16.744193    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:16.754778    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:16.754851    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:16.765369    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:16.765446    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:16.775749    8683 logs.go:276] 0 containers: []
	W0731 12:32:16.775760    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:16.775817    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:16.786779    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:16.786796    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:16.786801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:16.798655    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:16.798669    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:16.816011    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:16.816027    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:16.828503    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:16.828516    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:16.840833    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:16.840845    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:16.853778    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:16.853789    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:16.891943    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:16.891952    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:16.906448    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:16.906461    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:16.918704    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:16.918715    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:16.943890    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:16.943903    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:16.955938    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:16.955949    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:16.973515    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:16.973527    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:17.013131    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:17.013138    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:17.017938    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:17.017948    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:17.042787    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:17.042799    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:17.065975    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:17.065982    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:17.078957    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:17.078972    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:17.092737    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:17.092747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:17.104369    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:17.104381    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:19.616461    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:21.889552    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:21.889763    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:21.906227    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:21.906314    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:21.919100    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:21.919167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:21.930021    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:21.930088    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:21.942256    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:21.942330    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:21.953860    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:21.953932    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:21.964120    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:21.964187    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:21.974366    8672 logs.go:276] 0 containers: []
	W0731 12:32:21.974375    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:21.974426    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:21.984506    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:21.984522    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:21.984528    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:21.995906    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:21.995919    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:22.007714    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:22.007725    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:22.045436    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:22.045450    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:22.059440    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:22.059449    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:22.073997    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:22.074011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:22.086000    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:22.086011    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:22.120562    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:22.120573    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:22.135122    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:22.135134    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:22.140018    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:22.140027    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:22.162795    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:22.162816    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:22.176654    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:22.176668    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:22.190447    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:22.190460    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:22.208673    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:22.208685    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:22.223607    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:22.223622    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:22.236493    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:22.236504    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:22.274771    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:22.274779    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:24.790090    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:24.618818    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:24.618986    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:24.636393    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:24.636489    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:24.651875    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:24.651949    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:24.663291    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:24.663363    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:24.674616    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:24.674683    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:24.685419    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:24.685478    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:24.695957    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:24.696025    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:24.711399    8683 logs.go:276] 0 containers: []
	W0731 12:32:24.711411    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:24.711472    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:24.721770    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:24.721785    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:24.721790    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:24.733729    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:24.733740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:24.752495    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:24.752505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:24.770354    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:24.770365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:24.781632    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:24.781642    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:24.793792    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:24.793801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:24.804960    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:24.804971    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:24.846051    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:24.846074    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:24.860227    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:24.860245    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:24.872233    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:24.872246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:24.883996    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:24.884011    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:24.896280    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:24.896291    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:24.918126    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:24.918133    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:24.922481    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:24.922490    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:24.956704    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:24.956717    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:24.972585    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:24.972598    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:24.997424    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:24.997436    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:25.011726    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:25.011740    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:25.023176    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:25.023191    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:29.792227    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:29.792306    8672 kubeadm.go:597] duration metric: took 4m3.8682915s to restartPrimaryControlPlane
	W0731 12:32:29.792373    8672 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:32:29.792404    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:32:30.876316    8672 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.083916417s)
	I0731 12:32:30.876394    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:32:30.881380    8672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:32:30.884144    8672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:32:30.886849    8672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:32:30.886855    8672 kubeadm.go:157] found existing configuration files:
	
	I0731 12:32:30.886877    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf
	I0731 12:32:30.889435    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:32:30.889458    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:32:30.891861    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf
	I0731 12:32:30.894839    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:32:30.894858    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:32:30.897678    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf
	I0731 12:32:30.900241    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:32:30.900259    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:32:30.903376    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf
	I0731 12:32:30.906392    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:32:30.906416    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
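After kubeadm reset, each of the four kubeconfigs under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails (here grep exits with status 2 because the files no longer exist). A hedged shell equivalent of the cleanup just logged, with the endpoint taken from this run:

    endpoint=https://control-plane.minikube.internal:51245   # port 51245 is specific to this run
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep fails when the file is missing or the endpoint is absent;
      # either way the stale file is removed before kubeadm init rewrites it
      sudo grep "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
    done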
	I0731 12:32:30.909022    8672 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:32:30.927428    8672 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:32:30.927593    8672 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:32:30.979321    8672 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:32:30.979370    8672 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:32:30.979484    8672 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:32:31.030915    8672 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:32:31.035111    8672 out.go:204]   - Generating certificates and keys ...
	I0731 12:32:31.035145    8672 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:32:31.035175    8672 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:32:31.035214    8672 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:32:31.035242    8672 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:32:31.035276    8672 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:32:31.035298    8672 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:32:31.035353    8672 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:32:31.035409    8672 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:32:31.035441    8672 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:32:31.035488    8672 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:32:31.035516    8672 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:32:31.035541    8672 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:32:31.134824    8672 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:32:27.537713    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:31.197629    8672 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:32:31.543699    8672 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:32:31.595029    8672 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:32:31.628116    8672 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:32:31.628486    8672 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:32:31.628541    8672 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:32:31.701545    8672 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:32:31.704870    8672 out.go:204]   - Booting up control plane ...
	I0731 12:32:31.704913    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:32:31.705806    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:32:31.706497    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:32:31.706659    8672 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:32:31.707788    8672 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:32:32.539832    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:32.539947    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:32.551681    8683 logs.go:276] 2 containers: [0eae5f71990f 79af8db7b93f]
	I0731 12:32:32.551782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:32.569454    8683 logs.go:276] 2 containers: [c12f6313d57b 48a551feeb69]
	I0731 12:32:32.569534    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:32.585072    8683 logs.go:276] 2 containers: [a7a45b369a48 6915e8ffd332]
	I0731 12:32:32.585150    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:32.596716    8683 logs.go:276] 2 containers: [2d4d994716c9 77dcff6a0e07]
	I0731 12:32:32.596782    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:32.608736    8683 logs.go:276] 2 containers: [d108f856a9b7 5f0265d3c82c]
	I0731 12:32:32.608821    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:32.620639    8683 logs.go:276] 2 containers: [04328ceebc8c ee0d0084b71f]
	I0731 12:32:32.620710    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:32.632423    8683 logs.go:276] 0 containers: []
	W0731 12:32:32.632433    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:32.632499    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:32.644297    8683 logs.go:276] 2 containers: [62704bf39963 e35e0efca313]
	I0731 12:32:32.644311    8683 logs.go:123] Gathering logs for kube-controller-manager [ee0d0084b71f] ...
	I0731 12:32:32.644316    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d0084b71f"
	I0731 12:32:32.657550    8683 logs.go:123] Gathering logs for storage-provisioner [62704bf39963] ...
	I0731 12:32:32.657561    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62704bf39963"
	I0731 12:32:32.670988    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:32.670999    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:32.695518    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:32:32.695546    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:32.708554    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:32.708568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:32.713386    8683 logs.go:123] Gathering logs for kube-apiserver [79af8db7b93f] ...
	I0731 12:32:32.713396    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79af8db7b93f"
	I0731 12:32:32.741413    8683 logs.go:123] Gathering logs for kube-scheduler [77dcff6a0e07] ...
	I0731 12:32:32.741432    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dcff6a0e07"
	I0731 12:32:32.761540    8683 logs.go:123] Gathering logs for kube-controller-manager [04328ceebc8c] ...
	I0731 12:32:32.761557    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04328ceebc8c"
	I0731 12:32:32.781031    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:32.781047    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:32.823817    8683 logs.go:123] Gathering logs for etcd [48a551feeb69] ...
	I0731 12:32:32.823839    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a551feeb69"
	I0731 12:32:32.841720    8683 logs.go:123] Gathering logs for kube-proxy [5f0265d3c82c] ...
	I0731 12:32:32.841732    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0265d3c82c"
	I0731 12:32:32.855567    8683 logs.go:123] Gathering logs for storage-provisioner [e35e0efca313] ...
	I0731 12:32:32.855579    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e35e0efca313"
	I0731 12:32:32.867982    8683 logs.go:123] Gathering logs for etcd [c12f6313d57b] ...
	I0731 12:32:32.867995    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12f6313d57b"
	I0731 12:32:32.882377    8683 logs.go:123] Gathering logs for kube-scheduler [2d4d994716c9] ...
	I0731 12:32:32.882388    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d4d994716c9"
	I0731 12:32:32.894487    8683 logs.go:123] Gathering logs for kube-proxy [d108f856a9b7] ...
	I0731 12:32:32.894498    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d108f856a9b7"
	I0731 12:32:32.907588    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:32.907603    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:32.947184    8683 logs.go:123] Gathering logs for kube-apiserver [0eae5f71990f] ...
	I0731 12:32:32.947196    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eae5f71990f"
	I0731 12:32:32.962796    8683 logs.go:123] Gathering logs for coredns [a7a45b369a48] ...
	I0731 12:32:32.962809    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a45b369a48"
	I0731 12:32:32.974972    8683 logs.go:123] Gathering logs for coredns [6915e8ffd332] ...
	I0731 12:32:32.974986    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6915e8ffd332"
	I0731 12:32:35.489306    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:36.209654    8672 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501861 seconds
	I0731 12:32:36.209754    8672 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:32:36.213792    8672 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:32:36.729904    8672 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:32:36.730118    8672 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-443000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:32:37.234868    8672 kubeadm.go:310] [bootstrap-token] Using token: 6dq04j.kb1wbzf2t3iztkgl
	I0731 12:32:37.238411    8672 out.go:204]   - Configuring RBAC rules ...
	I0731 12:32:37.238475    8672 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:32:37.238527    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:32:37.242274    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:32:37.243243    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:32:37.244158    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:32:37.244957    8672 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:32:37.248222    8672 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:32:37.420853    8672 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:32:37.639778    8672 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:32:37.640169    8672 kubeadm.go:310] 
	I0731 12:32:37.640201    8672 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:32:37.640206    8672 kubeadm.go:310] 
	I0731 12:32:37.640238    8672 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:32:37.640243    8672 kubeadm.go:310] 
	I0731 12:32:37.640259    8672 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:32:37.640290    8672 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:32:37.640314    8672 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:32:37.640320    8672 kubeadm.go:310] 
	I0731 12:32:37.640345    8672 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:32:37.640347    8672 kubeadm.go:310] 
	I0731 12:32:37.640372    8672 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:32:37.640374    8672 kubeadm.go:310] 
	I0731 12:32:37.640399    8672 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:32:37.640436    8672 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:32:37.640497    8672 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:32:37.640500    8672 kubeadm.go:310] 
	I0731 12:32:37.640551    8672 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:32:37.640594    8672 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:32:37.640598    8672 kubeadm.go:310] 
	I0731 12:32:37.640649    8672 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dq04j.kb1wbzf2t3iztkgl \
	I0731 12:32:37.640704    8672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f \
	I0731 12:32:37.640714    8672 kubeadm.go:310] 	--control-plane 
	I0731 12:32:37.640718    8672 kubeadm.go:310] 
	I0731 12:32:37.640757    8672 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:32:37.640759    8672 kubeadm.go:310] 
	I0731 12:32:37.640801    8672 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dq04j.kb1wbzf2t3iztkgl \
	I0731 12:32:37.640850    8672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f 
	I0731 12:32:37.640995    8672 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
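The join commands printed above pin the cluster CA by a SHA-256 hash of its public key. If that hash ever needs to be recomputed, the standard kubeadm recipe is the openssl pipeline below (taken from the kubeadm documentation, not from this log, and assuming an RSA CA key; note that this run uses /var/lib/minikube/certs rather than the default /etc/kubernetes/pki as its certificateDir):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'   # for this cluster it should match the a2b9cdf2... hash above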
	I0731 12:32:37.641115    8672 cni.go:84] Creating CNI manager for ""
	I0731 12:32:37.641127    8672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:32:37.644282    8672 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:32:37.651257    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:32:37.654215    8672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
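The 496-byte conflist scp'd here is not reproduced in the log. For orientation only, a minimal bridge conflist of the general shape this step installs might look like the following; the file name matches the log, but every value inside (CNI version, subnet, plugin set) is an assumption, since minikube renders the real file from the cluster config:

    # sketch only: values are illustrative, not the bytes minikube actually wrote
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF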
	I0731 12:32:37.658880    8672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:32:37.658928    8672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:32:37.658959    8672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-443000 minikube.k8s.io/updated_at=2024_07_31T12_32_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=stopped-upgrade-443000 minikube.k8s.io/primary=true
	I0731 12:32:37.709902    8672 kubeadm.go:1113] duration metric: took 51.00825ms to wait for elevateKubeSystemPrivileges
	I0731 12:32:37.709949    8672 ops.go:34] apiserver oom_adj: -16
	I0731 12:32:37.710017    8672 kubeadm.go:394] duration metric: took 4m11.801928583s to StartCluster
	I0731 12:32:37.710028    8672 settings.go:142] acquiring lock: {Name:mk262cff1bf9355aa6c0530bb5de14a2847090f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:37.710184    8672 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:32:37.710552    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:37.710784    8672 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:32:37.710856    8672 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:32:37.710832    8672 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:32:37.710919    8672 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-443000"
	I0731 12:32:37.710923    8672 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-443000"
	I0731 12:32:37.710931    8672 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-443000"
	I0731 12:32:37.710933    8672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-443000"
	W0731 12:32:37.710935    8672 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:32:37.710945    8672 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0731 12:32:37.712104    8672 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:32:37.712237    8672 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-443000"
	W0731 12:32:37.712243    8672 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:32:37.712251    8672 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0731 12:32:37.715167    8672 out.go:177] * Verifying Kubernetes components...
	I0731 12:32:37.715606    8672 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:37.719367    8672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:32:37.719374    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:32:37.725199    8672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:32:37.729183    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:32:37.735173    8672 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:37.735181    8672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:32:37.735187    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:32:37.804285    8672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:32:37.809189    8672 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:32:37.809235    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:32:37.813116    8672 api_server.go:72] duration metric: took 102.321875ms to wait for apiserver process to appear ...
	I0731 12:32:37.813123    8672 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:32:37.813130    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:37.822008    8672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:37.859175    8672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
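Both addon manifests are applied with the cluster's own kubectl binary and kubeconfig, exactly as in the two Run: lines above. The 271-byte storageclass.yaml payload is not shown in the log; as a sketch, the default StorageClass minikube conventionally installs looks roughly like this (the name, provisioner, and annotation are minikube conventions, not read from this run):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
    EOF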
	I0731 12:32:40.491624    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:40.491684    8683 kubeadm.go:597] duration metric: took 4m7.416969666s to restartPrimaryControlPlane
	W0731 12:32:40.491730    8683 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:32:40.491748    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:32:41.561905    8683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.070150083s)
	I0731 12:32:41.561978    8683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:32:41.567212    8683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:32:41.570095    8683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:32:41.572966    8683 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:32:41.572972    8683 kubeadm.go:157] found existing configuration files:
	
	I0731 12:32:41.572993    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf
	I0731 12:32:41.575693    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:32:41.575720    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:32:41.578136    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf
	I0731 12:32:41.580803    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:32:41.580820    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:32:41.583762    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf
	I0731 12:32:41.586234    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:32:41.586256    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:32:41.589301    8683 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf
	I0731 12:32:41.592411    8683 kubeadm.go:163] "https://control-plane.minikube.internal:51322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51322 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:32:41.592433    8683 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:32:41.595154    8683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:32:41.612559    8683 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:32:41.612585    8683 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:32:41.659756    8683 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:32:41.659809    8683 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:32:41.659861    8683 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:32:41.710931    8683 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:32:41.715117    8683 out.go:204]   - Generating certificates and keys ...
	I0731 12:32:41.715155    8683 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:32:41.715203    8683 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:32:41.715287    8683 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:32:41.715373    8683 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:32:41.715467    8683 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:32:41.715556    8683 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:32:41.715642    8683 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:32:41.715678    8683 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:32:41.715738    8683 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:32:41.715802    8683 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:32:41.715823    8683 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:32:41.715861    8683 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:32:41.856521    8683 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:32:41.904704    8683 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:32:42.292508    8683 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:32:42.519789    8683 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:32:42.551053    8683 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:32:42.551404    8683 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:32:42.551479    8683 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:32:42.642655    8683 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:32:42.815206    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:42.815232    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:42.647192    8683 out.go:204]   - Booting up control plane ...
	I0731 12:32:42.647239    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:32:42.647275    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:32:42.647315    8683 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:32:42.647359    8683 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:32:42.647596    8683 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:32:47.145013    8683 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502491 seconds
	I0731 12:32:47.145074    8683 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:32:47.148628    8683 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:32:47.657415    8683 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:32:47.657586    8683 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-568000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:32:48.161561    8683 kubeadm.go:310] [bootstrap-token] Using token: q9milu.92yi4hukjtyyvv5w
	I0731 12:32:48.167530    8683 out.go:204]   - Configuring RBAC rules ...
	I0731 12:32:48.167599    8683 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:32:48.167643    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:32:48.171991    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:32:48.172867    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:32:48.173765    8683 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:32:48.174499    8683 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:32:48.177809    8683 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:32:48.329841    8683 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:32:48.567846    8683 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:32:48.568401    8683 kubeadm.go:310] 
	I0731 12:32:48.568432    8683 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:32:48.568436    8683 kubeadm.go:310] 
	I0731 12:32:48.568473    8683 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:32:48.568479    8683 kubeadm.go:310] 
	I0731 12:32:48.568491    8683 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:32:48.568522    8683 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:32:48.568550    8683 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:32:48.568553    8683 kubeadm.go:310] 
	I0731 12:32:48.568579    8683 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:32:48.568582    8683 kubeadm.go:310] 
	I0731 12:32:48.568606    8683 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:32:48.568611    8683 kubeadm.go:310] 
	I0731 12:32:48.568643    8683 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:32:48.568683    8683 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:32:48.568730    8683 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:32:48.568735    8683 kubeadm.go:310] 
	I0731 12:32:48.568780    8683 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:32:48.568815    8683 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:32:48.568818    8683 kubeadm.go:310] 
	I0731 12:32:48.568863    8683 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q9milu.92yi4hukjtyyvv5w \
	I0731 12:32:48.568918    8683 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f \
	I0731 12:32:48.568930    8683 kubeadm.go:310] 	--control-plane 
	I0731 12:32:48.568932    8683 kubeadm.go:310] 
	I0731 12:32:48.568973    8683 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:32:48.568978    8683 kubeadm.go:310] 
	I0731 12:32:48.569018    8683 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q9milu.92yi4hukjtyyvv5w \
	I0731 12:32:48.569072    8683 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f 
	I0731 12:32:48.569128    8683 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:32:48.569173    8683 cni.go:84] Creating CNI manager for ""
	I0731 12:32:48.569181    8683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:32:48.573643    8683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:32:48.580668    8683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:32:48.583959    8683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 12:32:48.590496    8683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:32:48.590586    8683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-568000 minikube.k8s.io/updated_at=2024_07_31T12_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=running-upgrade-568000 minikube.k8s.io/primary=true
	I0731 12:32:48.590658    8683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:32:48.632927    8683 ops.go:34] apiserver oom_adj: -16
	I0731 12:32:48.632947    8683 kubeadm.go:1113] duration metric: took 42.383625ms to wait for elevateKubeSystemPrivileges
	I0731 12:32:48.633042    8683 kubeadm.go:394] duration metric: took 4m15.572514958s to StartCluster
	I0731 12:32:48.633054    8683 settings.go:142] acquiring lock: {Name:mk262cff1bf9355aa6c0530bb5de14a2847090f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:48.633131    8683 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:32:48.633515    8683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:48.633698    8683 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:32:48.633829    8683 config.go:182] Loaded profile config "running-upgrade-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:32:48.633819    8683 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:32:48.633885    8683 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-568000"
	I0731 12:32:48.633895    8683 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-568000"
	W0731 12:32:48.633899    8683 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:32:48.633901    8683 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-568000"
	I0731 12:32:48.633911    8683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-568000"
	I0731 12:32:48.633916    8683 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0731 12:32:48.634883    8683 kapi.go:59] client config for running-upgrade-568000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/running-upgrade-568000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b981b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:32:48.635007    8683 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-568000"
	W0731 12:32:48.635012    8683 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:32:48.635019    8683 host.go:66] Checking if "running-upgrade-568000" exists ...
	I0731 12:32:48.637618    8683 out.go:177] * Verifying Kubernetes components...
	I0731 12:32:48.637991    8683 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:48.640856    8683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:32:48.640865    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:32:48.644504    8683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:32:47.815352    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:47.815386    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:48.648615    8683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:32:48.652496    8683 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:48.652503    8683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:32:48.652509    8683 sshutil.go:53] new ssh client: &{IP:localhost Port:51250 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/running-upgrade-568000/id_rsa Username:docker}
	I0731 12:32:48.728337    8683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:32:48.733427    8683 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:32:48.733465    8683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:32:48.737501    8683 api_server.go:72] duration metric: took 103.793042ms to wait for apiserver process to appear ...
	I0731 12:32:48.737508    8683 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:32:48.737514    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:48.760499    8683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:48.784150    8683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:52.815572    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:52.815593    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:53.738741    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:53.738778    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:57.815898    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:57.815949    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:58.739473    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:58.739542    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:02.816504    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:02.816524    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:03.739785    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:03.739814    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:07.817113    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:07.817156    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:33:08.201794    8672 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:33:08.206079    8672 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:33:08.220016    8672 addons.go:510] duration metric: took 30.509701666s for enable addons: enabled=[storage-provisioner]
	I0731 12:33:08.740095    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:08.740155    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:12.818055    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:12.818104    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:13.740694    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:13.740715    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:18.741238    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:18.741271    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:33:19.106146    8683 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:33:19.109743    8683 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:33:17.819174    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:17.819216    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:19.117603    8683 addons.go:510] duration metric: took 30.484320417s for enable addons: enabled=[storage-provisioner]
	I0731 12:33:22.820658    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.820701    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:23.741954    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:23.741987    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:27.822450    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:27.822470    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:28.742988    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:28.743021    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:32.824580    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:32.824622    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:33.744256    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:33.744300    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:37.826733    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:37.826838    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.840958    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:37.841028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.851840    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:37.851914    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.862761    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:37.862834    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.873109    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:37.873178    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.883168    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:37.883237    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.893949    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:37.894017    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.904648    8672 logs.go:276] 0 containers: []
	W0731 12:33:37.904661    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.904723    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.915186    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:37.915203    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.915211    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.919457    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.919467    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.956863    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:37.956874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:37.971610    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:37.971625    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:37.985392    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:37.985402    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:38.000552    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:38.000563    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:38.020815    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:38.020829    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:38.032890    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:38.032901    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:38.069639    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:38.069650    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:38.081065    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:38.081077    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:38.106638    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:38.106652    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:38.121712    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:38.121723    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:38.133320    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:38.133334    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:33:40.647075    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:38.745926    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:38.745984    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:45.649246    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:45.649367    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:45.662630    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:45.662710    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:45.673728    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:45.673799    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:45.684741    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:45.684811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:45.694976    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:45.695045    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:45.705361    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:45.705427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:45.715584    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:45.715653    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:45.725870    8672 logs.go:276] 0 containers: []
	W0731 12:33:45.725882    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:45.725942    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:45.736600    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:45.736613    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:45.736619    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:45.754143    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:45.754154    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:45.766396    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:45.766412    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:45.780913    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:45.780923    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:45.792993    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:45.793004    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:45.810914    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:45.810923    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:45.823140    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:45.823150    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:33:45.835200    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:45.835209    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:45.859808    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:45.859818    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:45.893273    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:45.893283    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:45.898185    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:45.898196    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:45.933575    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:45.933586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:45.947959    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:45.947974    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:43.748007    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:43.748051    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:48.463628    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:48.750312    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:48.750479    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:48.761769    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:33:48.761841    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:48.772253    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:33:48.772320    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:48.782944    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:33:48.783009    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:48.793420    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:33:48.793488    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:48.804214    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:33:48.804287    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:48.817149    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:33:48.817225    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:48.827150    8683 logs.go:276] 0 containers: []
	W0731 12:33:48.827162    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:48.827230    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:48.838064    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:33:48.838081    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:48.838086    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:48.862958    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:48.862972    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:48.867691    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:33:48.867697    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:33:48.881234    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:33:48.881248    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:33:48.897906    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:33:48.897920    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:33:48.909671    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:33:48.909682    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:33:48.921742    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:33:48.921753    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:33:48.939874    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:33:48.939885    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:33:48.951752    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:33:48.951764    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:48.963825    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:48.963836    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:49.001127    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:49.001136    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:49.038534    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:33:49.038545    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:33:49.054095    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:33:49.054106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:33:53.465802    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:53.466048    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:53.490028    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:53.490159    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:53.510850    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:53.510929    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:53.523726    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:53.523802    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:53.534647    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:53.534711    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:53.544963    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:53.545028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:53.555318    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:53.555376    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:53.564919    8672 logs.go:276] 0 containers: []
	W0731 12:33:53.564931    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:53.564986    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:53.575278    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:53.575297    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.575303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.579673    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:53.579682    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:53.599219    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:53.599230    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:53.610566    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.610577    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:53.621520    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.621530    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:53.654965    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:53.654973    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:53.689513    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:53.689524    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:53.703234    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:53.703244    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:33:53.719514    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:53.719524    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:53.734297    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:53.734307    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:53.745944    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:53.745954    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:53.766444    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:53.766454    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:53.778360    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.778372    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:51.573677    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:56.305318    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:56.575856    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:56.575982    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:56.587167    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:33:56.587252    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:56.598402    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:33:56.598475    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:56.609257    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:33:56.609327    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:56.620047    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:33:56.620120    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:56.630269    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:33:56.630340    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:56.640739    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:33:56.640805    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:56.651164    8683 logs.go:276] 0 containers: []
	W0731 12:33:56.651174    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:56.651234    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:56.661706    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:33:56.661721    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:33:56.661727    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:33:56.675594    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:33:56.675603    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:33:56.693717    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:33:56.693728    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:33:56.711200    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:33:56.711212    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:33:56.722883    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:33:56.722897    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:56.734421    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:56.734435    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:56.757811    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:56.757819    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:56.794989    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:56.794997    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:56.799542    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:56.799552    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:56.835698    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:33:56.835710    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:33:56.847894    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:33:56.847908    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:33:56.860008    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:33:56.860019    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:33:56.875205    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:33:56.875217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:33:59.389669    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:01.307492    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:01.307678    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:01.322315    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:01.322400    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:01.334859    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:01.334929    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:01.345561    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:01.345634    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:01.356509    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:01.356579    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:01.367000    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:01.367074    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:01.376963    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:01.377033    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:01.386753    8672 logs.go:276] 0 containers: []
	W0731 12:34:01.386765    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:01.386827    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:01.398051    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:01.398068    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:01.398074    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:01.402426    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:01.402433    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:01.440725    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:01.440738    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:01.454913    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:01.454925    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:01.466458    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:01.466467    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:01.481670    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:01.481684    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:01.493862    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:01.493873    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:01.519326    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:01.519342    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:01.554724    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:01.554735    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:01.567369    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:01.567386    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:01.579028    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:01.579040    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:01.600374    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:01.600384    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:01.612391    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:01.612403    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:04.127507    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:04.391624    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:04.391866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:04.419441    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:04.419540    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:04.443833    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:04.443911    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:04.456426    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:04.456493    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:04.466943    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:04.467015    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:04.477488    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:04.477563    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:04.488270    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:04.488336    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:04.499066    8683 logs.go:276] 0 containers: []
	W0731 12:34:04.499076    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:04.499128    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:04.509639    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:04.509655    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:04.509660    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:04.528292    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:04.528303    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:04.542050    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:04.542061    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:04.553766    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:04.553777    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:04.574508    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:04.574517    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:04.586010    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:04.586020    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:04.609560    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:04.609568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:04.647398    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:04.647410    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:04.651970    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:04.651976    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:04.685672    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:04.685688    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:04.697755    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:04.697765    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:04.719771    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:04.719782    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:04.734039    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:04.734051    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:09.128743    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:09.128933    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:09.145773    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:09.145864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:09.160867    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:09.160936    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:09.173261    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:09.173332    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:09.183199    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:09.183262    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:09.193753    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:09.193822    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:09.212810    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:09.212880    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:09.223057    8672 logs.go:276] 0 containers: []
	W0731 12:34:09.223067    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:09.223125    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:09.233317    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:09.233330    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:09.233334    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:09.238102    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:09.238110    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:09.274125    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:09.274140    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:09.287846    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:09.287856    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:09.302432    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:09.302442    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:09.314188    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:09.314198    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:09.332605    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:09.332615    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:09.343900    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:09.343913    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:09.377756    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:09.377767    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:09.395922    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:09.395932    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:09.407591    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:09.407602    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:09.420005    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:09.420014    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:09.443217    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:09.443224    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:07.247281    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:11.956406    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:12.249808    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:12.249957    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:12.263462    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:12.263546    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:12.275078    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:12.275149    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:12.285208    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:12.285276    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:12.295674    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:12.295740    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:12.306176    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:12.306246    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:12.316963    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:12.317030    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:12.327507    8683 logs.go:276] 0 containers: []
	W0731 12:34:12.327519    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:12.327578    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:12.337817    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:12.337834    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:12.337840    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:12.349707    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:12.349718    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:12.361423    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:12.361434    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:12.375680    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:12.375690    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:12.386853    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:12.386865    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:12.425700    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:12.425714    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:12.430379    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:12.430385    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:12.471557    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:12.471566    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:12.485894    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:12.485903    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:12.499906    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:12.499919    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:12.515011    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:12.515025    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:12.534906    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:12.534915    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:12.557765    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:12.557772    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:15.070578    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:16.958647    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:16.958863    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:16.978827    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:16.978909    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:16.995905    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:16.995985    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:17.007245    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:17.007313    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:17.020895    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:17.020967    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:17.031624    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:17.031699    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:17.043074    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:17.043152    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:17.053048    8672 logs.go:276] 0 containers: []
	W0731 12:34:17.053063    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:17.053119    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:17.063708    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:17.063723    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:17.063728    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:17.075548    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:17.075562    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:17.087014    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:17.087024    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:17.122649    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:17.122658    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:17.157559    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:17.157571    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:17.171867    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:17.171877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:17.183339    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:17.183351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:17.201487    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:17.201501    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:17.226277    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:17.226284    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:17.237482    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:17.237491    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:17.242188    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:17.242194    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:17.260856    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:17.260867    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:17.275549    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:17.275559    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:19.789324    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:20.072746    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:20.072965    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:20.095791    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:20.095885    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:20.109106    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:20.109167    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:20.127133    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:20.127191    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:20.139255    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:20.139319    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:20.149790    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:20.149847    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:20.160037    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:20.160102    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:20.170405    8683 logs.go:276] 0 containers: []
	W0731 12:34:20.170416    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:20.170466    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:20.180685    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:20.180700    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:20.180707    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:20.193484    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:20.193494    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:20.229896    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:20.229906    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:20.234558    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:20.234568    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:20.268235    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:20.268246    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:20.280423    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:20.280435    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:20.292016    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:20.292030    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:20.303868    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:20.303879    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:20.329028    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:20.329036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:20.343679    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:20.343692    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:20.357357    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:20.357365    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:20.372624    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:20.372634    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:20.384012    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:20.384024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:24.791876    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:24.792131    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:24.819835    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:24.819971    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:24.838015    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:24.838095    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:24.851756    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:24.851832    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:24.863336    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:24.863401    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:24.873903    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:24.873969    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:24.891298    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:24.891370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:24.903870    8672 logs.go:276] 0 containers: []
	W0731 12:34:24.903883    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:24.903958    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:24.916372    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:24.916390    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:24.916395    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:24.930801    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:24.930811    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:24.944221    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:24.944231    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:24.956313    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:24.956323    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:24.970879    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:24.970889    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:24.992378    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:24.992395    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:25.018312    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:25.018326    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:25.053666    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:25.053679    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:25.067838    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:25.067849    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:25.080445    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:25.080455    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:25.092106    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:25.092116    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:25.106266    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:25.106278    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:25.110579    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:25.110585    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
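Each failed probe is followed by the same container census: one docker ps -a per control-plane component, filtered on the k8s_<name> prefix that cri-dockerd gives pod containers. An empty result produces the W-level "No container was found matching \"kindnet\"" line, expected here since this cluster does not run kindnet. A sketch of that lookup, assuming direct local docker access (minikube actually executes it inside the guest over SSH, via ssh_runner.go):

	// containersFor sketches the per-component lookup behind the
	// "logs.go:276] N containers: [...]" lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containersFor(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containersFor(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c) // the kindnet case
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}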
	I0731 12:34:22.903220    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:27.650629    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:29.379455    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:29.379703    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:29.404946    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:29.405074    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:29.421499    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:29.421579    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:29.435118    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:29.435202    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:29.446620    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:29.446688    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:29.456663    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:29.456728    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:29.467180    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:29.467254    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:29.477420    8683 logs.go:276] 0 containers: []
	W0731 12:34:29.477435    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:29.477496    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:29.489022    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:29.489038    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:29.489044    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:29.501093    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:29.501103    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:29.539993    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:29.540008    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:29.554477    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:29.554487    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:29.568141    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:29.568150    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:29.579577    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:29.579589    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:29.594567    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:29.594578    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:29.612403    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:29.612414    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:29.624339    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:29.624349    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:29.629543    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:29.629550    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:29.663567    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:29.663577    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:29.674944    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:29.674955    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:29.686424    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:29.686435    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:32.652908    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:32.653233    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:32.779002    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:32.779097    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:32.793637    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:32.793713    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:32.805757    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:32.805826    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:32.843090    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:32.843161    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:32.855770    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:32.855832    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:32.867291    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:32.867364    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:32.878958    8672 logs.go:276] 0 containers: []
	W0731 12:34:32.878971    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:32.879028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:32.892291    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:32.892307    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:32.892312    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:32.928594    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:32.928603    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:32.967280    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:32.967290    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:32.985207    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:32.985218    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:32.997801    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:32.997813    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:33.010007    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:33.010020    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:33.035472    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:33.035482    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:33.047143    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:33.047154    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:33.051084    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:33.051090    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:33.065576    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:33.065586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:33.077227    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:33.077236    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:33.092415    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:33.092426    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:33.104847    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:33.104857    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:36.169042    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:32.211434    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:41.171288    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:41.171472    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:37.212238    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:37.212339    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:37.226035    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:37.226109    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:37.237120    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:37.237190    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:37.247633    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:37.247703    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:37.266729    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:37.266802    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:37.278198    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:37.278275    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:37.288923    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:37.288993    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:37.299093    8683 logs.go:276] 0 containers: []
	W0731 12:34:37.299103    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:37.299162    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:37.309537    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:37.309554    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:37.309559    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:37.314108    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:37.314114    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:37.328576    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:37.328585    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:37.352029    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:37.352036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:37.366849    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:37.366860    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:37.380858    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:37.380868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:37.398414    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:37.398425    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:37.436326    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:37.436334    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:37.473365    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:37.473380    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:37.487676    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:37.487686    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:37.500509    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:37.500521    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:37.513381    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:37.513392    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:37.525129    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:37.525139    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:40.040758    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:41.193808    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:41.193913    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:41.209955    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:41.210041    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:41.222421    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:41.222482    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:41.233378    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:41.233448    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:41.244789    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:41.244864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:41.255770    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:41.255836    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:41.266691    8672 logs.go:276] 0 containers: []
	W0731 12:34:41.266702    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:41.266761    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:41.277415    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:41.277428    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:41.277434    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:41.311165    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:41.311174    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:41.325679    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:41.325690    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:41.338233    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:41.338243    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:41.350070    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:41.350081    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:41.365202    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:41.365213    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:41.377085    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:41.377095    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:41.400756    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:41.400765    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:41.404874    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:41.404881    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:41.440985    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:41.440995    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:41.460219    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:41.460229    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:41.472121    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:41.472133    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:41.493373    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:41.493384    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:44.009274    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:45.042987    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:45.043160    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:45.061010    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:45.061097    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:45.074607    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:45.074677    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:45.086138    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:45.086206    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:45.097262    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:45.097331    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:45.108058    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:45.108119    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:45.119145    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:45.119205    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:45.129491    8683 logs.go:276] 0 containers: []
	W0731 12:34:45.129502    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:45.129556    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:45.140584    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:45.140601    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:45.140607    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:45.155966    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:45.155976    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:45.173644    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:45.173654    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:45.199040    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:45.199050    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:45.234986    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:45.234995    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:45.247776    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:45.247787    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:45.262314    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:45.262324    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:45.276049    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:45.276059    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:45.289817    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:45.289830    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:45.302226    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:45.302235    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:45.317792    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:45.317804    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:45.329707    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:45.329716    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:45.334098    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:45.334105    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:49.010719    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:49.011134    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:49.048836    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:49.048967    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:49.072660    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:49.072757    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:49.086692    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:34:49.086772    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:49.100841    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:49.100906    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:49.115364    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:49.115440    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:49.126161    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:49.126232    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:49.136880    8672 logs.go:276] 0 containers: []
	W0731 12:34:49.136893    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:49.136952    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:49.148071    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:49.148088    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:49.148093    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:49.163099    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:49.163110    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:49.174960    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:49.174971    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:49.200089    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:49.200100    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:49.204903    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:34:49.204910    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:34:49.216523    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:49.216535    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:49.228857    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:49.228867    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:49.247593    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:49.247604    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:49.283560    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:49.283572    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:49.296163    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:49.296174    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:49.312109    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:49.312121    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:49.346855    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:49.346863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:49.361419    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:34:49.361430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:34:49.372774    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:49.372790    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:49.384864    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:49.384876    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
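The gathering step itself is a fan-out of shell commands over the discovered container IDs plus a fixed set of host sources: journalctl for kubelet and docker/cri-docker, a filtered dmesg, kubectl describe nodes, docker logs --tail 400 per container, and a "container status" listing that prefers crictl when installed and falls back to docker ps -a otherwise. A condensed sketch, assuming local execution (the report runs each command through /bin/bash -c on the node; the container ID below is one from the cycle above):

	// gather sketches the "Gathering logs for X ..." fan-out in logs.go:123.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name, shellCmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		// One "docker logs" per container ID found by the census step.
		gather("kube-apiserver [8a82cab0c91a]", "docker logs --tail 400 8a82cab0c91a")
		// "container status": use crictl if it resolves on PATH, else docker.
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}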
	I0731 12:34:47.871335    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:51.899726    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:52.873547    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:52.873723    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:52.892636    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:34:52.892739    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:52.906906    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:34:52.906978    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:52.918804    8683 logs.go:276] 2 containers: [dbcb1acc77fa 8152fa50c3e3]
	I0731 12:34:52.918865    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:52.929433    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:34:52.929506    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:52.940353    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:34:52.940427    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:52.951360    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:34:52.951425    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:52.961747    8683 logs.go:276] 0 containers: []
	W0731 12:34:52.961758    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:52.961823    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:52.972505    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:34:52.972521    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:34:52.972526    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:34:52.986395    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:34:52.986405    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:34:53.001871    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:34:53.001882    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:34:53.013996    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:34:53.014007    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:34:53.032268    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:34:53.032279    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:53.044299    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:53.044312    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:53.049341    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:53.049348    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:53.086117    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:34:53.086129    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:34:53.097785    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:34:53.097797    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:34:53.109651    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:34:53.109664    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:34:53.121411    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:53.121421    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:53.146154    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:53.146164    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:53.183603    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:34:53.183616    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:34:55.700359    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:56.900987    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:56.901232    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:56.930507    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:56.930620    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:56.953352    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:56.953428    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:56.966556    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:34:56.966626    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:56.977753    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:56.977818    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:56.988970    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:56.989034    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:57.000132    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:57.000193    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:57.014585    8672 logs.go:276] 0 containers: []
	W0731 12:34:57.014596    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:57.014657    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:57.026084    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:57.026103    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:57.026116    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:57.030307    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:34:57.030317    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:34:57.042575    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:57.042588    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:57.060997    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:57.061011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:57.076612    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:57.076624    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:57.101909    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:57.101916    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:57.136107    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:57.136113    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:57.173555    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:57.173565    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:57.188547    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:57.188559    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:57.203772    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:57.203782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:57.220121    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:57.220133    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:57.232155    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:34:57.232164    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:34:57.244238    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:57.244250    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:57.256158    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:57.256172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:57.268501    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:57.268512    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:59.783290    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:00.702485    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:00.702648    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:00.734469    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:00.734536    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:00.746378    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:00.746456    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:00.757191    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:00.757257    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:00.768671    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:00.768733    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:00.779651    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:00.779716    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:00.790495    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:00.790569    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:00.801106    8683 logs.go:276] 0 containers: []
	W0731 12:35:00.801122    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:00.801181    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:00.817821    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:00.817840    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:00.817846    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:00.829605    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:00.829616    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:00.845724    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:00.845735    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:00.857337    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:00.857350    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:00.873978    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:00.873987    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:00.885362    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:00.885372    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:00.908890    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:00.908899    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:00.923475    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:00.923484    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:00.941357    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:00.941368    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:00.954861    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:00.954874    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:00.969791    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:00.969801    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:00.981867    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:00.981877    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:01.021471    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:01.021483    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:01.035741    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:01.035750    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:01.071577    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:01.071585    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:04.785435    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:04.785688    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:04.806163    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:04.806266    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:04.821513    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:04.821592    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:04.835542    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:04.835625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:04.846284    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:04.846370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:04.856795    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:04.856864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:04.866761    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:04.866824    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:04.876417    8672 logs.go:276] 0 containers: []
	W0731 12:35:04.876429    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:04.876491    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:04.886915    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:04.886931    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:04.886936    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:04.898902    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:04.898915    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:04.910895    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:04.910910    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:04.948769    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:04.948782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:04.962470    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:04.962484    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:04.974300    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:04.974313    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:04.989057    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:04.989068    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:05.025412    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:05.025422    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:05.039733    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:05.039741    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:05.052388    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:05.052399    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:05.063857    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:05.063868    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:05.080492    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:05.080501    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:05.105728    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:05.105735    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:05.117468    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:05.117483    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:05.121795    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:05.121802    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:03.578508    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:07.643088    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:08.580785    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:08.580960    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:08.600934    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:08.601017    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:08.625831    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:08.625910    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:08.636501    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:08.636574    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:08.650214    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:08.650283    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:08.660649    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:08.660726    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:08.672532    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:08.672603    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:08.682659    8683 logs.go:276] 0 containers: []
	W0731 12:35:08.682670    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:08.682733    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:08.693421    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:08.693440    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:08.693444    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:08.705507    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:08.705517    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:08.720549    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:08.720559    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:08.738009    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:08.738021    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:08.742702    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:08.742708    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:08.778831    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:08.778843    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:08.793066    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:08.793077    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:08.805922    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:08.805934    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:08.842002    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:08.842010    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:08.853137    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:08.853148    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:08.864373    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:08.864384    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:08.878803    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:08.878815    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:08.890738    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:08.890751    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:08.903009    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:08.903018    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:08.914577    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:08.914591    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:12.643652    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:12.644075    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:12.681260    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:12.681400    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:12.703931    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:12.704025    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:12.719089    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:12.719170    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:12.732968    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:12.733032    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:12.743943    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:12.744021    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:12.755549    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:12.755623    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:12.766453    8672 logs.go:276] 0 containers: []
	W0731 12:35:12.766463    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:12.766525    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:12.777717    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:12.777734    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:12.777740    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:12.813599    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:12.813611    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:12.826758    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:12.826772    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:12.838420    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:12.838430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:12.865856    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:12.865867    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:12.878285    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:12.878303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:12.913661    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:12.913673    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:12.925350    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:12.925363    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:12.939507    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:12.939518    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:12.951685    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:12.951696    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:12.963267    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:12.963282    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:12.988212    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:12.988224    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:12.993196    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:12.993203    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:13.007340    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:13.007351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:13.020919    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:13.020934    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:15.538841    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:11.441460    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:20.541060    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:20.541312    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:20.555202    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:20.555293    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:20.566434    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:20.566501    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:20.577706    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:20.577776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:20.588477    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:20.588545    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:20.598974    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:20.599044    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:20.610152    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:20.610227    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:20.620012    8672 logs.go:276] 0 containers: []
	W0731 12:35:20.620023    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:20.620082    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:20.645892    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:20.645913    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:20.645919    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:20.659525    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:20.659537    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:20.671574    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:20.671587    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:20.683775    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:20.683785    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:20.695576    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:20.695590    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:20.700567    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:20.700576    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:20.718253    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:20.718264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:20.735044    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:20.735058    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:20.753315    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:20.753326    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:20.788878    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:20.788888    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:20.830242    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:20.830256    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:20.842188    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:20.842199    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:20.855379    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:20.855393    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:20.872279    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:20.872290    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:20.888729    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:20.888740    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:16.443799    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:16.443939    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:16.469802    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:16.469875    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:16.480943    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:16.481022    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:16.493029    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:16.493104    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:16.504778    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:16.504858    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:16.515774    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:16.515865    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:16.526534    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:16.526599    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:16.537766    8683 logs.go:276] 0 containers: []
	W0731 12:35:16.537775    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:16.537835    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:16.548986    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:16.549002    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:16.549009    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:16.573207    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:16.573216    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:16.610528    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:16.610535    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:16.624516    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:16.624525    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:16.636134    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:16.636149    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:16.651698    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:16.651708    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:16.663730    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:16.663739    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:16.683682    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:16.683691    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:16.688830    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:16.688837    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:16.703233    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:16.703243    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:16.714854    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:16.714865    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:16.727391    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:16.727404    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:16.739404    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:16.739418    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:16.754181    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:16.754194    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:16.789930    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:16.789941    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:19.304371    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:23.413953    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:24.306803    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:24.307207    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:24.342670    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:24.342790    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:24.362028    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:24.362110    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:24.377279    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:24.377352    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:24.389726    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:24.389796    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:24.400854    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:24.400911    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:24.412203    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:24.412266    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:24.423773    8683 logs.go:276] 0 containers: []
	W0731 12:35:24.423787    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:24.423848    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:24.434922    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:24.434939    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:24.434944    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:24.447653    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:24.447664    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:24.473408    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:24.473418    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:24.485427    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:24.485440    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:24.500985    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:24.500999    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:24.516468    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:24.516480    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:24.535097    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:24.535108    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:24.540219    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:24.540226    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:24.554057    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:24.554071    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:24.568364    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:24.568376    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:24.579909    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:24.579921    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:24.593110    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:24.593122    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:24.606726    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:24.606739    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:24.618919    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:24.618932    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:24.657492    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:24.657500    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:28.416475    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:28.416796    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:28.444407    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:28.444529    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:28.462136    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:28.462226    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:28.479699    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:28.479776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:28.490902    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:28.490963    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:28.503192    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:28.503254    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:28.513739    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:28.513811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:28.523616    8672 logs.go:276] 0 containers: []
	W0731 12:35:28.523625    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:28.523675    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:28.534142    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:28.534159    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:28.534163    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:28.571226    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:28.571241    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:28.583492    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:28.583503    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:28.595115    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:28.595124    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:28.631670    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:28.631682    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:28.643352    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:28.643364    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:28.655252    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:28.655264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:28.671441    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:28.671453    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:28.695676    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:28.695686    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:28.709899    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:28.709910    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:28.721297    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:28.721308    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:28.733153    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:28.733164    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:28.748369    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:28.748382    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:28.752645    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:28.752651    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:28.786463    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:28.786476    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:27.193395    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:31.306436    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:32.194063    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:32.194261    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:32.216205    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:32.216315    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:32.231668    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:32.231751    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:32.244418    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:32.244495    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:32.255100    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:32.255171    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:32.265835    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:32.265906    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:32.276406    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:32.276477    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:32.286709    8683 logs.go:276] 0 containers: []
	W0731 12:35:32.286719    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:32.286776    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:32.296801    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:32.296817    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:32.296821    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:32.336487    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:32.336497    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:32.348399    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:32.348408    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:32.360187    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:32.360197    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:32.371707    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:32.371717    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:32.383745    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:32.383754    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:32.407376    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:32.407386    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:32.424805    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:32.424816    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:32.439704    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:32.439714    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:32.452174    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:32.452184    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:32.488096    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:32.488112    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:32.508269    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:32.508280    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:32.522819    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:32.522833    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:32.534457    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:32.534467    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:32.539374    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:32.539381    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:35.059446    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:36.308439    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:36.308575    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:36.322741    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:36.322818    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:36.334342    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:36.334405    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:36.345340    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:36.345412    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:36.355964    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:36.356025    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:36.366348    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:36.366413    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:36.376973    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:36.377041    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:36.386889    8672 logs.go:276] 0 containers: []
	W0731 12:35:36.386900    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:36.386961    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:36.397367    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:36.397383    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:36.397388    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:36.433630    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:36.433644    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:36.448257    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:36.448267    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:36.459893    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:36.459908    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:36.471227    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:36.471240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:36.482958    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:36.482970    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:36.509213    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:36.509222    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:36.521558    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:36.521572    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:36.532961    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:36.532971    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:36.544943    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:36.544953    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:36.560349    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:36.560359    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:36.594705    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:36.594714    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:36.612775    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:36.612788    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:36.617149    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:36.617159    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:36.631062    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:36.631072    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:39.145375    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:40.061761    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:40.061902    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:40.073570    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:40.073640    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:40.086657    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:40.086735    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:40.097872    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:40.097938    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:40.108730    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:40.108803    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:40.119631    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:40.119705    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:40.130797    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:40.130866    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:40.141218    8683 logs.go:276] 0 containers: []
	W0731 12:35:40.141229    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:40.141284    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:40.151882    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:40.151900    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:40.151906    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:40.175234    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:40.175241    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:40.187353    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:40.187364    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:40.199472    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:40.199484    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:40.214166    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:40.214179    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:40.228370    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:40.228382    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:40.240171    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:40.240181    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:40.252738    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:40.252749    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:40.267831    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:40.267843    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:40.286129    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:40.286142    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:40.323158    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:40.323165    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:40.358922    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:40.358938    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:40.371286    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:40.371301    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:40.376283    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:40.376290    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:40.388245    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:40.388255    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:44.147635    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:44.147805    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:44.163345    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:44.163427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:44.175374    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:44.175444    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:44.186493    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:44.186570    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:44.196698    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:44.196767    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:44.207002    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:44.207077    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:44.217275    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:44.217340    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:44.227005    8672 logs.go:276] 0 containers: []
	W0731 12:35:44.227021    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:44.227087    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:44.237427    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:44.237443    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:44.237449    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:44.254263    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:44.254274    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:44.287477    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:44.287485    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:44.292105    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:44.292114    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:44.306528    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:44.306538    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:44.318061    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:44.318071    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:44.352770    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:44.352780    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:44.364732    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:44.364743    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:44.376534    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:44.376545    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:44.388506    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:44.388519    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:44.400160    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:44.400172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:44.411740    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:44.411752    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:44.426029    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:44.426040    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:44.437836    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:44.437849    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:44.452481    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:44.452491    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:42.902183    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:46.977908    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:47.904607    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:47.904833    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:47.932814    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:47.932908    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:47.949217    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:47.949289    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:47.961433    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:47.961506    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:47.972237    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:47.972304    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:47.982909    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:47.982978    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:47.993481    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:47.993549    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:48.004320    8683 logs.go:276] 0 containers: []
	W0731 12:35:48.004332    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:48.004392    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:48.015843    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:48.015862    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:48.015868    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:48.030255    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:48.030264    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:48.042451    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:48.042460    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:48.059873    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:48.059884    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:48.072152    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:48.072164    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:48.108446    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:48.108454    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:48.120008    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:48.120019    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:48.132237    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:48.132246    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:48.168537    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:48.168547    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:48.188514    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:48.188524    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:48.213191    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:48.213199    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:48.228704    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:48.228714    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:48.240659    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:48.240669    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:48.245419    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:48.245425    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:48.259495    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:48.259505    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:50.779487    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:51.980078    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:51.980185    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:51.991900    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:51.991971    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:52.002758    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:52.002828    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:52.015892    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:52.015958    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:52.026375    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:52.026446    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:52.036325    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:52.036389    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:52.047405    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:52.047471    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:52.057331    8672 logs.go:276] 0 containers: []
	W0731 12:35:52.057340    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:52.057397    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:52.067871    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:52.067888    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:52.067893    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:52.101556    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:52.101564    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:52.117269    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:52.117281    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:52.133402    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:52.133411    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:52.158516    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:52.158525    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:52.173675    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:52.173685    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:52.187581    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:52.187592    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:52.201244    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:52.201255    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:52.220313    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:52.220330    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:52.231670    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:52.231681    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:52.243687    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:52.243700    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:52.247950    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:52.247957    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:52.281469    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:52.281484    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:52.300178    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:52.300191    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:52.312624    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:52.312638    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:54.825809    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:55.781733    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:55.782093    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:55.819446    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:35:55.819582    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:55.840883    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:35:55.840977    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:55.856594    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:35:55.856676    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:55.870202    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:35:55.870274    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:55.881455    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:35:55.881521    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:55.892662    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:35:55.892732    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:55.903040    8683 logs.go:276] 0 containers: []
	W0731 12:35:55.903052    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:55.903111    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:55.913565    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:35:55.913583    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:35:55.913588    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:35:55.929341    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:35:55.929350    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:35:55.940670    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:35:55.940679    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:35:55.957971    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:55.957982    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:55.962894    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:35:55.962909    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:35:55.975636    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:55.975650    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:56.001650    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:35:56.001660    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:56.015334    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:56.015345    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:56.053503    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:35:56.053512    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:35:56.067402    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:35:56.067412    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:35:56.086691    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:56.086701    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:56.121444    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:35:56.121456    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:35:56.136160    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:35:56.136173    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:35:56.151687    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:35:56.151700    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:35:56.167245    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:35:56.167258    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:35:59.828383    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:59.828581    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:59.843328    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:59.843410    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:59.855275    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:59.855352    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:59.867013    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:59.867092    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:59.878707    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:59.878778    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:59.889621    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:59.889695    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:59.901380    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:59.901451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:59.912091    8672 logs.go:276] 0 containers: []
	W0731 12:35:59.912104    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:59.912167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:59.927656    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:59.927672    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:59.927678    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:59.946702    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:59.946712    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:59.951845    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:59.951851    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:59.990559    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:59.990570    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:00.002457    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:00.002469    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:00.015081    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:00.015093    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:00.027855    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:00.027869    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:00.047323    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:00.047338    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:00.063259    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:00.063274    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:00.078285    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:00.078298    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:00.096377    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:00.096386    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:00.107798    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:00.107809    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:00.119346    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:00.119356    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:00.155057    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:00.155065    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:00.172143    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:00.172153    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:58.685783    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:02.698123    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:03.688149    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:03.688439    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:03.720564    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:03.720698    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:03.740441    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:03.740538    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:03.757465    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:03.757550    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:03.771686    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:03.771765    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:03.783644    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:03.783720    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:03.794447    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:03.794520    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:03.805777    8683 logs.go:276] 0 containers: []
	W0731 12:36:03.805789    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:03.805848    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:03.817841    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:03.817857    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:03.817862    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:03.829674    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:03.829687    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:03.868433    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:03.868442    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:03.903624    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:03.903636    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:03.917714    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:03.917727    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:03.922668    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:03.922677    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:03.937962    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:03.937975    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:03.956400    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:03.956411    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:03.968063    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:03.968074    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:03.979665    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:03.979675    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:04.005839    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:04.005853    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:04.019798    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:04.019811    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:04.032273    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:04.032285    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:04.059972    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:04.059985    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:04.074195    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:04.074206    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:07.700432    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:07.700593    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:07.714260    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:07.714341    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:07.725010    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:07.725077    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:07.735506    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:07.735585    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:07.750397    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:07.750471    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:07.760737    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:07.760805    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:07.771222    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:07.771285    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:07.781349    8672 logs.go:276] 0 containers: []
	W0731 12:36:07.781359    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:07.781421    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:07.791428    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:07.791443    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:07.791448    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:07.803472    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:07.803483    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:07.808037    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:07.808044    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:07.820902    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:07.820912    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:07.832359    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:07.832370    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:07.852944    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:07.852955    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:07.867097    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:07.867109    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:07.883802    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:07.883812    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:07.899662    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:07.899672    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:07.911924    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:07.911936    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:07.923574    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:07.923586    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:07.947202    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:07.947211    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:07.980870    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:07.980880    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:08.029424    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:08.029435    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:08.041311    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:08.041322    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:10.561079    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:06.587524    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:15.563387    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:15.563544    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:15.580324    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:15.580407    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:15.592249    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:15.592322    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:15.603050    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:15.603119    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:15.613814    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:15.613884    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:15.624983    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:15.625055    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:15.640565    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:15.640635    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:15.653774    8672 logs.go:276] 0 containers: []
	W0731 12:36:15.653784    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:15.653839    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:15.663697    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:15.663714    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:15.663719    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:15.675251    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:15.675262    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:15.687253    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:15.687264    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:15.699030    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:15.699041    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:15.711004    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:15.711019    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:15.728506    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:15.728515    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:15.752194    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:15.752202    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:15.764043    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:15.764053    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:15.775215    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:15.775224    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:15.786625    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:15.786636    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:15.791666    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:15.791673    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:15.825830    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:15.825843    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:15.841057    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:15.841067    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:15.856271    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:15.856280    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:15.870951    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:15.870962    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:11.589859    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:11.589982    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:11.606383    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:11.606463    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:11.619253    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:11.619313    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:11.630508    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:11.630566    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:11.641325    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:11.641394    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:11.652098    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:11.652163    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:11.662630    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:11.662699    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:11.673356    8683 logs.go:276] 0 containers: []
	W0731 12:36:11.673367    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:11.673428    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:11.687021    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:11.687040    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:11.687045    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:11.711673    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:11.711683    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:11.734871    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:11.734878    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:11.747226    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:11.747238    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:11.759283    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:11.759293    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:11.771095    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:11.771105    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:11.783206    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:11.783217    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:11.795029    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:11.795040    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:11.813559    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:11.813573    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:11.825622    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:11.825632    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:11.862064    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:11.862075    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:11.876306    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:11.876319    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:11.914707    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:11.914715    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:11.929708    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:11.929720    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:11.934651    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:11.934657    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:14.451769    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:18.408592    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:19.454051    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:19.454238    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:19.465801    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:19.465883    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:19.480452    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:19.480522    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:19.490837    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:19.490916    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:19.502098    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:19.502171    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:19.517037    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:19.517109    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:19.527170    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:19.527243    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:19.537928    8683 logs.go:276] 0 containers: []
	W0731 12:36:19.537949    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:19.538010    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:19.549025    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:19.549043    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:19.549048    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:19.572907    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:19.572917    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:19.577508    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:19.577516    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:19.591473    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:19.591487    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:19.608301    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:19.608313    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:19.620212    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:19.620226    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:19.658174    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:19.658185    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:19.674529    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:19.674539    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:19.691877    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:19.691887    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:19.725956    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:19.725967    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:19.739941    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:19.739951    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:19.751657    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:19.751666    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:19.765031    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:19.765043    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:19.777095    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:19.777106    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:19.788517    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:19.788528    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:23.410921    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:23.411168    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:23.436184    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:23.436303    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:23.452827    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:23.452906    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:23.472025    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:23.472094    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:23.482159    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:23.482230    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:23.492878    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:23.492946    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:23.509489    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:23.509559    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:23.520062    8672 logs.go:276] 0 containers: []
	W0731 12:36:23.520073    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:23.520131    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:23.530888    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:23.530903    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:23.530909    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:23.566434    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:23.566448    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:23.578652    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:23.578664    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:23.590428    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:23.590442    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:23.626548    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:23.626555    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:23.631267    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:23.631272    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:23.648604    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:23.648614    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:23.660527    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:23.660543    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:23.682578    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:23.682589    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:23.696835    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:23.696845    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:23.709052    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:23.709064    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:23.720854    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:23.720863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:23.732585    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:23.732595    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:23.747305    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:23.747315    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:23.759396    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:23.759406    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:22.302557    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:26.286524    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:27.304812    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:27.304953    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:27.320116    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:27.320205    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:27.332668    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:27.332744    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:27.343629    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:27.343700    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:27.355138    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:27.355207    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:27.365769    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:27.365841    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:27.385094    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:27.385160    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:27.395271    8683 logs.go:276] 0 containers: []
	W0731 12:36:27.395284    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:27.395341    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:27.405340    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:27.405358    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:27.405364    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:27.418844    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:27.418855    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:27.432797    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:27.432812    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:27.534509    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:27.534523    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:27.550837    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:27.550849    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:27.570249    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:27.570260    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:27.582227    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:27.582237    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:27.598530    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:27.598543    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:27.611375    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:27.611391    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:27.629289    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:27.629303    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:27.641245    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:27.641256    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:27.658118    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:27.658132    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:27.669311    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:27.669325    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:27.693503    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:27.693512    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:27.731082    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:27.731088    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:30.236063    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:31.287294    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:31.287462    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:31.305055    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:31.305140    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:31.318009    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:31.318082    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:31.329392    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:31.329469    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:31.340913    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:31.340981    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:31.351106    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:31.351171    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:31.361218    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:31.361276    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:31.370710    8672 logs.go:276] 0 containers: []
	W0731 12:36:31.370720    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:31.370771    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:31.380850    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:31.380870    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:31.380875    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:31.397176    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:31.397186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:31.408442    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:31.408452    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:31.423651    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:31.423662    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:31.435077    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:31.435087    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:31.460162    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:31.460172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:31.494773    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:31.494781    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:31.513090    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:31.513101    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:31.534394    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:31.534405    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:31.539124    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:31.539131    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:31.573811    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:31.573822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:31.585610    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:31.585619    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:31.600305    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:31.600316    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:31.612277    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:31.612291    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:31.627052    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:31.627063    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:34.140799    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:35.236349    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:35.236617    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:35.265244    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:35.265374    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:35.283535    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:35.283634    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:35.297520    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:35.297589    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:35.309393    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:35.309463    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:35.320010    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:35.320068    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:35.330901    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:35.330973    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:35.341211    8683 logs.go:276] 0 containers: []
	W0731 12:36:35.341222    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:35.341272    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:35.352237    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:35.352254    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:35.352259    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:35.369773    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:35.369787    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:35.381109    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:35.381119    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:35.396395    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:35.396406    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:35.409222    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:35.409231    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:35.421027    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:35.421036    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:35.432768    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:35.432777    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:35.457159    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:35.457166    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:35.468897    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:35.468908    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:35.507295    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:35.507308    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:35.526820    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:35.526831    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:35.538960    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:35.538973    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:35.554580    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:35.554595    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:35.559631    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:35.559639    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:35.596771    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:35.596784    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:39.142601    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:39.146874    8672 out.go:177] 
	W0731 12:36:39.149950    8672 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:36:39.149956    8672 out.go:239] * 
	W0731 12:36:39.150409    8672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:36:39.165833    8672 out.go:177] 
	I0731 12:36:38.109003    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:43.111168    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:43.111380    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:43.129435    8683 logs.go:276] 1 containers: [cdf9cb262bfb]
	I0731 12:36:43.129524    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:43.144313    8683 logs.go:276] 1 containers: [2c68c2eec108]
	I0731 12:36:43.144388    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:43.155634    8683 logs.go:276] 4 containers: [881a3284271e f77c021bc198 dbcb1acc77fa 8152fa50c3e3]
	I0731 12:36:43.155711    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:43.166731    8683 logs.go:276] 1 containers: [03ac31dacf44]
	I0731 12:36:43.166797    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:43.180817    8683 logs.go:276] 1 containers: [a4e7d273cebe]
	I0731 12:36:43.180889    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:43.192421    8683 logs.go:276] 1 containers: [2884f95bf986]
	I0731 12:36:43.192490    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:43.202894    8683 logs.go:276] 0 containers: []
	W0731 12:36:43.202909    8683 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:43.202967    8683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:43.213938    8683 logs.go:276] 1 containers: [338b3b9b98fc]
	I0731 12:36:43.213958    8683 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:43.213963    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:43.236960    8683 logs.go:123] Gathering logs for coredns [dbcb1acc77fa] ...
	I0731 12:36:43.236967    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbcb1acc77fa"
	I0731 12:36:43.248734    8683 logs.go:123] Gathering logs for etcd [2c68c2eec108] ...
	I0731 12:36:43.248747    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c68c2eec108"
	I0731 12:36:43.262610    8683 logs.go:123] Gathering logs for coredns [881a3284271e] ...
	I0731 12:36:43.262624    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 881a3284271e"
	I0731 12:36:43.274091    8683 logs.go:123] Gathering logs for kube-scheduler [03ac31dacf44] ...
	I0731 12:36:43.274104    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03ac31dacf44"
	I0731 12:36:43.289321    8683 logs.go:123] Gathering logs for kube-apiserver [cdf9cb262bfb] ...
	I0731 12:36:43.289334    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf9cb262bfb"
	I0731 12:36:43.303672    8683 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:43.303684    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:43.341000    8683 logs.go:123] Gathering logs for coredns [f77c021bc198] ...
	I0731 12:36:43.341011    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f77c021bc198"
	I0731 12:36:43.353025    8683 logs.go:123] Gathering logs for kube-proxy [a4e7d273cebe] ...
	I0731 12:36:43.353039    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e7d273cebe"
	I0731 12:36:43.365030    8683 logs.go:123] Gathering logs for storage-provisioner [338b3b9b98fc] ...
	I0731 12:36:43.365046    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 338b3b9b98fc"
	I0731 12:36:43.376560    8683 logs.go:123] Gathering logs for container status ...
	I0731 12:36:43.376571    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:43.388390    8683 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:43.388402    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:43.427016    8683 logs.go:123] Gathering logs for coredns [8152fa50c3e3] ...
	I0731 12:36:43.427024    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8152fa50c3e3"
	I0731 12:36:43.441874    8683 logs.go:123] Gathering logs for kube-controller-manager [2884f95bf986] ...
	I0731 12:36:43.441884    8683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2884f95bf986"
	I0731 12:36:43.459326    8683 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:43.459337    8683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:45.965865    8683 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:50.968180    8683 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:50.973586    8683 out.go:177] 
	W0731 12:36:50.977525    8683 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:36:50.977531    8683 out.go:239] * 
	W0731 12:36:50.978025    8683 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:36:50.985549    8683 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-31 19:27:39 UTC, ends at Wed 2024-07-31 19:37:07 UTC. --
	Jul 31 19:36:50 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:50Z" level=error msg="ContainerStats resp: {<nil> }"
	Jul 31 19:36:50 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:50Z" level=error msg="Error response from daemon: No such container: dbcb1acc77fabb6b4f8ca2766e30370614db901e1085ee7959508c7ac707f966 Failed to get stats from container dbcb1acc77fabb6b4f8ca2766e30370614db901e1085ee7959508c7ac707f966"
	Jul 31 19:36:50 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:50Z" level=error msg="ContainerStats resp: {0x4000415a00 linux}"
	Jul 31 19:36:50 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:50Z" level=error msg="ContainerStats resp: {0x40004a50c0 linux}"
	Jul 31 19:36:51 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:51Z" level=error msg="ContainerStats resp: {0x4000ab7ac0 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000879cc0 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000743f40 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000958780 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000958bc0 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000959440 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000359a40 linux}"
	Jul 31 19:36:52 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:52Z" level=error msg="ContainerStats resp: {0x4000836040 linux}"
	Jul 31 19:36:53 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:36:58 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:36:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:37:02 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:02Z" level=error msg="ContainerStats resp: {0x40008366c0 linux}"
	Jul 31 19:37:02 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:02Z" level=error msg="ContainerStats resp: {0x4000836800 linux}"
	Jul 31 19:37:03 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:03Z" level=error msg="ContainerStats resp: {0x4000879180 linux}"
	Jul 31 19:37:03 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x40007423c0 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x40008786c0 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x4000878b00 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x4000878f80 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x4000879500 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x4000879680 linux}"
	Jul 31 19:37:04 running-upgrade-568000 cri-dockerd[4316]: time="2024-07-31T19:37:04Z" level=error msg="ContainerStats resp: {0x40009581c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5b2912cf54614       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   0f37100c89427
	d3eba21e9e5eb       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   acc60651dd4bb
	881a3284271e1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   acc60651dd4bb
	f77c021bc198e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0f37100c89427
	a4e7d273cebe8       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ca484525c3c6d
	338b3b9b98fc6       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   5ba548394b228
	cdf9cb262bfbc       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   d8b4e90064e09
	2c68c2eec1080       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   92b6226633ec7
	2884f95bf9867       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a12a49a1c5e6f
	03ac31dacf44c       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c0d548a66943c
	
	
	==> coredns [5b2912cf5461] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8093071001975953467.5034862202836518656. HINFO: read udp 10.244.0.2:46997->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8093071001975953467.5034862202836518656. HINFO: read udp 10.244.0.2:37291->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8093071001975953467.5034862202836518656. HINFO: read udp 10.244.0.2:51894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8093071001975953467.5034862202836518656. HINFO: read udp 10.244.0.2:58728->10.0.2.3:53: i/o timeout
	
	
	==> coredns [881a3284271e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:32833->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:39855->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:51513->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:54088->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:46599->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:46667->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:53549->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:60381->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:48431->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2483263200192326875.2852944060699839085. HINFO: read udp 10.244.0.3:34351->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d3eba21e9e5e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2220480930258733568.7522780899747780152. HINFO: read udp 10.244.0.3:54765->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2220480930258733568.7522780899747780152. HINFO: read udp 10.244.0.3:40633->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2220480930258733568.7522780899747780152. HINFO: read udp 10.244.0.3:40899->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2220480930258733568.7522780899747780152. HINFO: read udp 10.244.0.3:52829->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f77c021bc198] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:35097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:39564->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:60258->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:35085->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:35580->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:39727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:50350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:48894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:37195->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2536833687361855877.2137984382432514445. HINFO: read udp 10.244.0.2:39509->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-568000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-568000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=running-upgrade-568000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T12_32_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:32:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-568000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:37:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:32:48 +0000   Wed, 31 Jul 2024 19:32:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:32:48 +0000   Wed, 31 Jul 2024 19:32:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:32:48 +0000   Wed, 31 Jul 2024 19:32:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:32:48 +0000   Wed, 31 Jul 2024 19:32:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-568000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a2025a930b847c198c0699bd872c12d
	  System UUID:                5a2025a930b847c198c0699bd872c12d
	  Boot ID:                    204b9b1d-6156-4000-87b2-2ee49859d26f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5jqmj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m6s
	  kube-system                 coredns-6d4b75cb6d-vt7ml                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m6s
	  kube-system                 etcd-running-upgrade-568000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-568000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-568000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-gs625                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-running-upgrade-568000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m24s (x4 over 4m25s)  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x3 over 4m25s)  kubelet          Node running-upgrade-568000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x3 over 4m25s)  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m19s                  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m19s                  kubelet          Node running-upgrade-568000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s                  kubelet          Node running-upgrade-568000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m19s                  kubelet          Node running-upgrade-568000 status is now: NodeReady
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m6s                   node-controller  Node running-upgrade-568000 event: Registered Node running-upgrade-568000 in Controller
	
	
	==> dmesg <==
	[  +0.085082] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.142192] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090321] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.082987] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.676901] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[Jul31 19:28] systemd-fstab-generator[1935]: Ignoring "noauto" for root device
	[ +13.665323] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.722485] systemd-fstab-generator[2630]: Ignoring "noauto" for root device
	[  +0.198079] systemd-fstab-generator[2709]: Ignoring "noauto" for root device
	[  +0.109303] systemd-fstab-generator[2722]: Ignoring "noauto" for root device
	[  +0.103034] systemd-fstab-generator[2735]: Ignoring "noauto" for root device
	[  +5.327899] kauditd_printk_skb: 14 callbacks suppressed
	[  +2.620466] systemd-fstab-generator[4273]: Ignoring "noauto" for root device
	[  +0.094326] systemd-fstab-generator[4284]: Ignoring "noauto" for root device
	[  +0.072271] systemd-fstab-generator[4295]: Ignoring "noauto" for root device
	[  +0.084890] systemd-fstab-generator[4309]: Ignoring "noauto" for root device
	[  +2.618601] systemd-fstab-generator[4665]: Ignoring "noauto" for root device
	[  +1.166897] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.196559] systemd-fstab-generator[5005]: Ignoring "noauto" for root device
	[  +1.274623] systemd-fstab-generator[5151]: Ignoring "noauto" for root device
	[  +3.025992] kauditd_printk_skb: 29 callbacks suppressed
	[ +15.487962] kauditd_printk_skb: 3 callbacks suppressed
	[Jul31 19:32] systemd-fstab-generator[13867]: Ignoring "noauto" for root device
	[  +5.612117] systemd-fstab-generator[14463]: Ignoring "noauto" for root device
	[  +0.473782] systemd-fstab-generator[14616]: Ignoring "noauto" for root device
	
	
	==> etcd [2c68c2eec108] <==
	{"level":"info","ts":"2024-07-31T19:32:43.922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-31T19:32:43.922Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-31T19:32:43.923Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:32:43.923Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:32:43.924Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:32:43.924Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:32:43.924Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:32:44.673Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:32:44.674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:32:44.674Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:32:44.674Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:32:44.674Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-568000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:32:44.674Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:32:44.675Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:32:44.675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:32:44.675Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:32:44.675Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-31T19:32:44.675Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:37:07 up 9 min,  0 users,  load average: 0.20, 0.32, 0.19
	Linux running-upgrade-568000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [cdf9cb262bfb] <==
	I0731 19:32:45.898791       1 controller.go:611] quota admission added evaluator for: namespaces
	I0731 19:32:45.916543       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:32:45.916578       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 19:32:45.916586       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:32:45.916591       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 19:32:45.917003       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:32:45.919999       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 19:32:45.935522       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:32:46.648144       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 19:32:46.820140       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 19:32:46.822318       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 19:32:46.822334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:32:46.963682       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:32:46.973309       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:32:47.002658       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 19:32:47.004781       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0731 19:32:47.005226       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 19:32:47.006559       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:32:47.952410       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 19:32:48.385263       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 19:32:48.390952       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 19:32:48.427979       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 19:33:01.356379       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0731 19:33:01.755381       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0731 19:33:02.537821       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [2884f95bf986] <==
	I0731 19:33:01.002634       1 shared_informer.go:262] Caches are synced for cronjob
	I0731 19:33:01.009266       1 shared_informer.go:262] Caches are synced for service account
	I0731 19:33:01.067274       1 shared_informer.go:262] Caches are synced for namespace
	I0731 19:33:01.070413       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 19:33:01.103944       1 shared_informer.go:262] Caches are synced for taint
	I0731 19:33:01.104061       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 19:33:01.104088       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0731 19:33:01.104031       1 shared_informer.go:262] Caches are synced for persistent volume
	W0731 19:33:01.104154       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-568000. Assuming now as a timestamp.
	I0731 19:33:01.104199       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 19:33:01.104037       1 shared_informer.go:262] Caches are synced for ephemeral
	I0731 19:33:01.104413       1 event.go:294] "Event occurred" object="running-upgrade-568000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-568000 event: Registered Node running-upgrade-568000 in Controller"
	I0731 19:33:01.153692       1 shared_informer.go:262] Caches are synced for expand
	I0731 19:33:01.153817       1 shared_informer.go:262] Caches are synced for attach detach
	I0731 19:33:01.165372       1 shared_informer.go:262] Caches are synced for PVC protection
	I0731 19:33:01.173710       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 19:33:01.174823       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:33:01.191067       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:33:01.358386       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0731 19:33:01.610874       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:33:01.682236       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:33:01.682277       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 19:33:01.758088       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gs625"
	I0731 19:33:01.806521       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vt7ml"
	I0731 19:33:01.810595       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5jqmj"
	
	
	==> kube-proxy [a4e7d273cebe] <==
	I0731 19:33:02.526577       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0731 19:33:02.526600       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0731 19:33:02.526609       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 19:33:02.535564       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 19:33:02.535574       1 server_others.go:206] "Using iptables Proxier"
	I0731 19:33:02.535588       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 19:33:02.535697       1 server.go:661] "Version info" version="v1.24.1"
	I0731 19:33:02.535706       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:33:02.535958       1 config.go:317] "Starting service config controller"
	I0731 19:33:02.535969       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 19:33:02.536008       1 config.go:226] "Starting endpoint slice config controller"
	I0731 19:33:02.536013       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 19:33:02.536856       1 config.go:444] "Starting node config controller"
	I0731 19:33:02.536879       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 19:33:02.639630       1 shared_informer.go:262] Caches are synced for node config
	I0731 19:33:02.639648       1 shared_informer.go:262] Caches are synced for service config
	I0731 19:33:02.639681       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03ac31dacf44] <==
	W0731 19:32:45.864638       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:32:45.864642       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:32:45.864656       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:32:45.864659       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 19:32:45.864669       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:32:45.864671       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:32:45.864683       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:32:45.864690       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:32:45.864701       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:32:45.864704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:32:45.864713       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:32:45.864716       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:32:46.681716       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:32:46.681735       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:32:46.747262       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:32:46.747279       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:32:46.764371       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:32:46.764383       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:32:46.771190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:32:46.771203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:32:46.776671       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:32:46.776711       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:32:46.869988       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:32:46.870088       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 19:32:49.659067       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-31 19:27:39 UTC, ends at Wed 2024-07-31 19:37:07 UTC. --
	Jul 31 19:32:50 running-upgrade-568000 kubelet[14486]: E0731 19:32:50.215349   14486 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-568000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-568000"
	Jul 31 19:32:50 running-upgrade-568000 kubelet[14486]: E0731 19:32:50.414989   14486 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-568000\" already exists" pod="kube-system/etcd-running-upgrade-568000"
	Jul 31 19:32:50 running-upgrade-568000 kubelet[14486]: I0731 19:32:50.613293   14486 request.go:601] Waited for 1.11079622s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 31 19:32:50 running-upgrade-568000 kubelet[14486]: E0731 19:32:50.616398   14486 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-568000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-568000"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.020235   14486 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.020653   14486 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.109474   14486 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.223993   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hvq7\" (UniqueName: \"kubernetes.io/projected/030c1167-2c15-44f4-aedf-a6de572326c5-kube-api-access-5hvq7\") pod \"storage-provisioner\" (UID: \"030c1167-2c15-44f4-aedf-a6de572326c5\") " pod="kube-system/storage-provisioner"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.224032   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/030c1167-2c15-44f4-aedf-a6de572326c5-tmp\") pod \"storage-provisioner\" (UID: \"030c1167-2c15-44f4-aedf-a6de572326c5\") " pod="kube-system/storage-provisioner"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: E0731 19:33:01.329041   14486 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: E0731 19:33:01.329064   14486 projected.go:192] Error preparing data for projected volume kube-api-access-5hvq7 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: E0731 19:33:01.329107   14486 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/030c1167-2c15-44f4-aedf-a6de572326c5-kube-api-access-5hvq7 podName:030c1167-2c15-44f4-aedf-a6de572326c5 nodeName:}" failed. No retries permitted until 2024-07-31 19:33:01.829091664 +0000 UTC m=+13.453308717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5hvq7" (UniqueName: "kubernetes.io/projected/030c1167-2c15-44f4-aedf-a6de572326c5-kube-api-access-5hvq7") pod "storage-provisioner" (UID: "030c1167-2c15-44f4-aedf-a6de572326c5") : configmap "kube-root-ca.crt" not found
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.760967   14486 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.809489   14486 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.812837   14486 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933642   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd-lib-modules\") pod \"kube-proxy-gs625\" (UID: \"da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd\") " pod="kube-system/kube-proxy-gs625"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933669   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33c8dc31-2089-474e-98f8-040d8419ebf5-config-volume\") pod \"coredns-6d4b75cb6d-5jqmj\" (UID: \"33c8dc31-2089-474e-98f8-040d8419ebf5\") " pod="kube-system/coredns-6d4b75cb6d-5jqmj"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933684   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wqn\" (UniqueName: \"kubernetes.io/projected/da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd-kube-api-access-x4wqn\") pod \"kube-proxy-gs625\" (UID: \"da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd\") " pod="kube-system/kube-proxy-gs625"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933695   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49ds\" (UniqueName: \"kubernetes.io/projected/9bbcd81b-b210-466d-9768-1a201f5b1e64-kube-api-access-g49ds\") pod \"coredns-6d4b75cb6d-vt7ml\" (UID: \"9bbcd81b-b210-466d-9768-1a201f5b1e64\") " pod="kube-system/coredns-6d4b75cb6d-vt7ml"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933724   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlt8d\" (UniqueName: \"kubernetes.io/projected/33c8dc31-2089-474e-98f8-040d8419ebf5-kube-api-access-hlt8d\") pod \"coredns-6d4b75cb6d-5jqmj\" (UID: \"33c8dc31-2089-474e-98f8-040d8419ebf5\") " pod="kube-system/coredns-6d4b75cb6d-5jqmj"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933737   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bbcd81b-b210-466d-9768-1a201f5b1e64-config-volume\") pod \"coredns-6d4b75cb6d-vt7ml\" (UID: \"9bbcd81b-b210-466d-9768-1a201f5b1e64\") " pod="kube-system/coredns-6d4b75cb6d-vt7ml"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933746   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd-kube-proxy\") pod \"kube-proxy-gs625\" (UID: \"da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd\") " pod="kube-system/kube-proxy-gs625"
	Jul 31 19:33:01 running-upgrade-568000 kubelet[14486]: I0731 19:33:01.933756   14486 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd-xtables-lock\") pod \"kube-proxy-gs625\" (UID: \"da4bc0c3-ed63-4a64-9c5f-0f162f43a5bd\") " pod="kube-system/kube-proxy-gs625"
	Jul 31 19:36:50 running-upgrade-568000 kubelet[14486]: I0731 19:36:50.327476   14486 scope.go:110] "RemoveContainer" containerID="dbcb1acc77fabb6b4f8ca2766e30370614db901e1085ee7959508c7ac707f966"
	Jul 31 19:36:50 running-upgrade-568000 kubelet[14486]: I0731 19:36:50.350078   14486 scope.go:110] "RemoveContainer" containerID="8152fa50c3e3aad1e11bf212a457c598caf0938ccad50d22bd971cc44d64f91b"
	
	
	==> storage-provisioner [338b3b9b98fc] <==
	I0731 19:33:02.173912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:33:02.177943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:33:02.177962       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 19:33:02.181005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 19:33:02.181156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-568000_c70e169e-c315-4319-9a76-7e271387f69b!
	I0731 19:33:02.181404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f31ae21-5eb3-4c2b-a431-5028b1d6424f", APIVersion:"v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-568000_c70e169e-c315-4319-9a76-7e271387f69b became leader
	I0731 19:33:02.282118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-568000_c70e169e-c315-4319-9a76-7e271387f69b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-568000 -n running-upgrade-568000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-568000 -n running-upgrade-568000: exit status 2 (15.731371916s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-568000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-568000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-568000
--- FAIL: TestRunningBinaryUpgrade (621.46s)
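
Note on the kube-scheduler log above: the repeated "forbidden" list/watch errors are the scheduler racing apiserver startup, before the system:kube-scheduler RBAC bindings are served; they stop once "Caches are synced" appears, so they are startup noise rather than the cause of this failure. As a sketch, one way to double-check the scheduler's effective permissions on a live cluster (assuming an admin kubeconfig; the impersonated user comes from the log):

	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i watch nodes --as=system:kube-scheduler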

TestKubernetesUpgrade (17.21s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.9572125s)

-- stdout --
	* [kubernetes-upgrade-490000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-490000" primary control-plane node in "kubernetes-upgrade-490000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-490000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:45.043866    8586 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:45.044002    8586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:45.044005    8586 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:45.044008    8586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:45.044126    8586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:26:45.045182    8586 out.go:298] Setting JSON to false
	I0731 12:26:45.061568    8586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5174,"bootTime":1722448831,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:26:45.061644    8586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:45.065455    8586 out.go:177] * [kubernetes-upgrade-490000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:45.073355    8586 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:26:45.073429    8586 notify.go:220] Checking for updates...
	I0731 12:26:45.080239    8586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:26:45.083371    8586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:45.086471    8586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:45.089256    8586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:26:45.092364    8586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:45.095741    8586 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:26:45.095802    8586 config.go:182] Loaded profile config "offline-docker-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:26:45.095858    8586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:45.100257    8586 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:26:45.107391    8586 start.go:297] selected driver: qemu2
	I0731 12:26:45.107398    8586 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:26:45.107404    8586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:45.109703    8586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:26:45.114319    8586 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:26:45.117354    8586 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:26:45.117370    8586 cni.go:84] Creating CNI manager for ""
	I0731 12:26:45.117377    8586 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:26:45.117398    8586 start.go:340] cluster config:
	{Name:kubernetes-upgrade-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:26:45.121042    8586 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:45.132307    8586 out.go:177] * Starting "kubernetes-upgrade-490000" primary control-plane node in "kubernetes-upgrade-490000" cluster
	I0731 12:26:45.137750    8586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:26:45.137766    8586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:26:45.137777    8586 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:45.137836    8586 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:45.137841    8586 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:26:45.137891    8586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kubernetes-upgrade-490000/config.json ...
	I0731 12:26:45.137901    8586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kubernetes-upgrade-490000/config.json: {Name:mkfbad2bab477e5560c6c8f767b71ba285aedb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:26:45.138099    8586 start.go:360] acquireMachinesLock for kubernetes-upgrade-490000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:45.138129    8586 start.go:364] duration metric: took 22.333µs to acquireMachinesLock for "kubernetes-upgrade-490000"
	I0731 12:26:45.138139    8586 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:45.138174    8586 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:45.145258    8586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:26:45.162655    8586 start.go:159] libmachine.API.Create for "kubernetes-upgrade-490000" (driver="qemu2")
	I0731 12:26:45.162682    8586 client.go:168] LocalClient.Create starting
	I0731 12:26:45.162752    8586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:45.162784    8586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:45.162795    8586 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:45.162847    8586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:45.162869    8586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:45.162877    8586 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:45.163486    8586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:45.341536    8586 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:45.387664    8586 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:45.387670    8586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:45.387850    8586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:45.401512    8586 main.go:141] libmachine: STDOUT: 
	I0731 12:26:45.401534    8586 main.go:141] libmachine: STDERR: 
	I0731 12:26:45.401586    8586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2 +20000M
	I0731 12:26:45.409304    8586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:45.409317    8586 main.go:141] libmachine: STDERR: 
	I0731 12:26:45.409334    8586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:45.409342    8586 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:45.409354    8586 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:45.409377    8586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:d1:af:5c:2d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:45.410963    8586 main.go:141] libmachine: STDOUT: 
	I0731 12:26:45.410975    8586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:45.411000    8586 client.go:171] duration metric: took 248.316959ms to LocalClient.Create
	I0731 12:26:47.413148    8586 start.go:128] duration metric: took 2.274989041s to createHost
	I0731 12:26:47.413199    8586 start.go:83] releasing machines lock for "kubernetes-upgrade-490000", held for 2.275096208s
	W0731 12:26:47.413288    8586 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:47.427335    8586 out.go:177] * Deleting "kubernetes-upgrade-490000" in qemu2 ...
	W0731 12:26:47.460105    8586 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:47.460132    8586 start.go:729] Will try again in 5 seconds ...
	I0731 12:26:52.462275    8586 start.go:360] acquireMachinesLock for kubernetes-upgrade-490000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:52.572411    8586 start.go:364] duration metric: took 110.042667ms to acquireMachinesLock for "kubernetes-upgrade-490000"
	I0731 12:26:52.572548    8586 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:26:52.572757    8586 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:26:52.577392    8586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:26:52.623886    8586 start.go:159] libmachine.API.Create for "kubernetes-upgrade-490000" (driver="qemu2")
	I0731 12:26:52.623932    8586 client.go:168] LocalClient.Create starting
	I0731 12:26:52.624042    8586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:26:52.624091    8586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:52.624105    8586 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:52.624163    8586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:26:52.624193    8586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:26:52.624232    8586 main.go:141] libmachine: Parsing certificate...
	I0731 12:26:52.624765    8586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:26:52.818130    8586 main.go:141] libmachine: Creating SSH key...
	I0731 12:26:52.913847    8586 main.go:141] libmachine: Creating Disk image...
	I0731 12:26:52.913853    8586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:26:52.914063    8586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:52.923395    8586 main.go:141] libmachine: STDOUT: 
	I0731 12:26:52.923414    8586 main.go:141] libmachine: STDERR: 
	I0731 12:26:52.923460    8586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2 +20000M
	I0731 12:26:52.931180    8586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:26:52.931193    8586 main.go:141] libmachine: STDERR: 
	I0731 12:26:52.931201    8586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:52.931206    8586 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:26:52.931216    8586 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:52.931255    8586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4d:fc:b5:da:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:52.932897    8586 main.go:141] libmachine: STDOUT: 
	I0731 12:26:52.932910    8586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:52.932922    8586 client.go:171] duration metric: took 308.990709ms to LocalClient.Create
	I0731 12:26:54.935218    8586 start.go:128] duration metric: took 2.362429833s to createHost
	I0731 12:26:54.935389    8586 start.go:83] releasing machines lock for "kubernetes-upgrade-490000", held for 2.362985792s
	W0731 12:26:54.935670    8586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-490000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-490000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:54.944270    8586 out.go:177] 
	W0731 12:26:54.948248    8586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:26:54.948285    8586 out.go:239] * 
	* 
	W0731 12:26:54.950891    8586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:26:54.960199    8586 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
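
Both create attempts above die at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the Kubernetes version under test is never exercised. A minimal sketch for checking the daemon on the build host, assuming socket_vmnet was installed via Homebrew (paths and the service name may differ for other installs):

	# confirm the unix socket exists
	ls -l /var/run/socket_vmnet
	# confirm a socket_vmnet process is actually running
	pgrep -fl socket_vmnet
	# if managed by Homebrew services, restart it (vmnet needs root)
	sudo brew services restart socket_vmnet
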
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-490000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-490000: (1.793607916s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-490000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-490000 status --format={{.Host}}: exit status 7 (62.986ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.206422458s)

-- stdout --
	* [kubernetes-upgrade-490000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-490000" primary control-plane node in "kubernetes-upgrade-490000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:56.867066    8626 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:56.867179    8626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:56.867182    8626 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:56.867184    8626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:56.867337    8626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:26:56.868343    8626 out.go:298] Setting JSON to false
	I0731 12:26:56.884504    8626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5185,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:26:56.884573    8626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:56.888524    8626 out.go:177] * [kubernetes-upgrade-490000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:56.897428    8626 notify.go:220] Checking for updates...
	I0731 12:26:56.901399    8626 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:26:56.907331    8626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:26:56.914342    8626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:56.922413    8626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:56.929406    8626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:26:56.935337    8626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:56.939657    8626 config.go:182] Loaded profile config "kubernetes-upgrade-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:26:56.939924    8626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:56.943310    8626 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:26:56.951399    8626 start.go:297] selected driver: qemu2
	I0731 12:26:56.951405    8626 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:26:56.951451    8626 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:56.953813    8626 cni.go:84] Creating CNI manager for ""
	I0731 12:26:56.953832    8626 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:56.953858    8626 start.go:340] cluster config:
	{Name:kubernetes-upgrade-490000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:26:56.957408    8626 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:56.965436    8626 out.go:177] * Starting "kubernetes-upgrade-490000" primary control-plane node in "kubernetes-upgrade-490000" cluster
	I0731 12:26:56.968383    8626 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:26:56.968398    8626 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:26:56.968410    8626 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:56.968472    8626 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:56.968477    8626 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:26:56.968531    8626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kubernetes-upgrade-490000/config.json ...
	I0731 12:26:56.968914    8626 start.go:360] acquireMachinesLock for kubernetes-upgrade-490000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:56.968949    8626 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "kubernetes-upgrade-490000"
	I0731 12:26:56.968958    8626 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:26:56.968963    8626 fix.go:54] fixHost starting: 
	I0731 12:26:56.969087    8626 fix.go:112] recreateIfNeeded on kubernetes-upgrade-490000: state=Stopped err=<nil>
	W0731 12:26:56.969096    8626 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:26:56.972792    8626 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-490000" ...
	I0731 12:26:56.979423    8626 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:56.979457    8626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4d:fc:b5:da:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:26:56.981382    8626 main.go:141] libmachine: STDOUT: 
	I0731 12:26:56.981401    8626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:26:56.981431    8626 fix.go:56] duration metric: took 12.467ms for fixHost
	I0731 12:26:56.981436    8626 start.go:83] releasing machines lock for "kubernetes-upgrade-490000", held for 12.483167ms
	W0731 12:26:56.981443    8626 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:26:56.981478    8626 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:26:56.981483    8626 start.go:729] Will try again in 5 seconds ...
	I0731 12:27:01.981708    8626 start.go:360] acquireMachinesLock for kubernetes-upgrade-490000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:27:01.982203    8626 start.go:364] duration metric: took 387.375µs to acquireMachinesLock for "kubernetes-upgrade-490000"
	I0731 12:27:01.982346    8626 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:27:01.982368    8626 fix.go:54] fixHost starting: 
	I0731 12:27:01.983087    8626 fix.go:112] recreateIfNeeded on kubernetes-upgrade-490000: state=Stopped err=<nil>
	W0731 12:27:01.983126    8626 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:27:01.992874    8626 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-490000" ...
	I0731 12:27:01.997772    8626 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:27:01.998081    8626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4d:fc:b5:da:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubernetes-upgrade-490000/disk.qcow2
	I0731 12:27:02.007411    8626 main.go:141] libmachine: STDOUT: 
	I0731 12:27:02.007496    8626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:27:02.007594    8626 fix.go:56] duration metric: took 25.225875ms for fixHost
	I0731 12:27:02.007625    8626 start.go:83] releasing machines lock for "kubernetes-upgrade-490000", held for 25.397208ms
	W0731 12:27:02.007877    8626 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-490000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-490000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:27:02.016765    8626 out.go:177] 
	W0731 12:27:02.021064    8626 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:27:02.021159    8626 out.go:239] * 
	* 
	W0731 12:27:02.023862    8626 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:27:02.033806    8626 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
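The exit status 80 above is not a QEMU failure as such: the start aborts because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet before it would exec qemu-system-aarch64 (the client passes the connected fd to QEMU as -netdev socket,id=net0,fd=3, per the command line in the log). A minimal Go sketch of the failing step, assuming only that the daemon is expected to listen on that unix socket; the dial error mirrors the "Connection refused" in the log:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client connects to this unix socket before launching
		// QEMU; if the socket_vmnet daemon is not running, the dial fails
		// with ECONNREFUSED, matching the error captured above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}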
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-490000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-490000 version --output=json: exit status 1 (64.875708ms)

** stderr ** 
	error: context "kubernetes-upgrade-490000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
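The kubectl error is a direct consequence of the failed start: the run exited before the profile's context was written to kubeconfig, so "context does not exist" simply means the entry is absent. A hedged sketch of that check, assuming a k8s.io/client-go dependency; the kubeconfig path is the one from this run's KUBECONFIG:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19360-6578/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// kubectl --context X fails with "context does not exist" exactly
		// when this map has no entry for X.
		_, ok := cfg.Contexts["kubernetes-upgrade-490000"]
		fmt.Println("context present:", ok)
	}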
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-31 12:27:02.110262 -0700 PDT m=+738.964555668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-490000 -n kubernetes-upgrade-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-490000 -n kubernetes-upgrade-490000: exit status 7 (32.881833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-490000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-490000
--- FAIL: TestKubernetesUpgrade (17.21s)

TestStoppedBinaryUpgrade/Upgrade (585.55s)
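This test exercises the stop-and-upgrade path: it provisions a cluster with a previously released minikube binary, stops it, then restarts the same profile with the binary under test. A compact Go sketch of the same three steps as driven by version_upgrade_test.go below, with the released-binary path as a placeholder for the test's downloaded temp file:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
		}
	}

	func main() {
		oldBin := "/tmp/minikube-v1.26.0"     // placeholder; the test downloads a release binary
		newBin := "out/minikube-darwin-arm64" // binary under test
		profile := "stopped-upgrade-443000"

		run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2")
		run(oldBin, "-p", profile, "stop")
		run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2")
	}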

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4280730678 start -p stopped-upgrade-443000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4280730678 start -p stopped-upgrade-443000 --memory=2200 --vm-driver=qemu2 : (50.338357792s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4280730678 -p stopped-upgrade-443000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4280730678 -p stopped-upgrade-443000 stop: (12.094526209s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.038821708s)

-- stdout --
	* [stopped-upgrade-443000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-443000" primary control-plane node in "stopped-upgrade-443000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-443000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 12:27:56.181905    8672 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:27:56.182074    8672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:27:56.182080    8672 out.go:304] Setting ErrFile to fd 2...
	I0731 12:27:56.182083    8672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:27:56.182259    8672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:27:56.183325    8672 out.go:298] Setting JSON to false
	I0731 12:27:56.201307    8672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5245,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:27:56.201387    8672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:27:56.206501    8672 out.go:177] * [stopped-upgrade-443000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:27:56.214476    8672 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:27:56.214526    8672 notify.go:220] Checking for updates...
	I0731 12:27:56.221432    8672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:27:56.225437    8672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:27:56.228449    8672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:27:56.231389    8672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:27:56.234490    8672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:27:56.237691    8672 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:27:56.241368    8672 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:27:56.244464    8672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:27:56.247388    8672 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:27:56.254405    8672 start.go:297] selected driver: qemu2
	I0731 12:27:56.254412    8672 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:56.254464    8672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:27:56.257381    8672 cni.go:84] Creating CNI manager for ""
	I0731 12:27:56.257403    8672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:27:56.257430    8672 start.go:340] cluster config:
	{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:56.257497    8672 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:56.260466    8672 out.go:177] * Starting "stopped-upgrade-443000" primary control-plane node in "stopped-upgrade-443000" cluster
	I0731 12:27:56.268386    8672 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:27:56.268412    8672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:27:56.268420    8672 cache.go:56] Caching tarball of preloaded images
	I0731 12:27:56.268479    8672 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:27:56.268486    8672 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:27:56.268549    8672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0731 12:27:56.268841    8672 start.go:360] acquireMachinesLock for stopped-upgrade-443000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:27:56.268873    8672 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "stopped-upgrade-443000"
	I0731 12:27:56.268882    8672 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:27:56.268888    8672 fix.go:54] fixHost starting: 
	I0731 12:27:56.268997    8672 fix.go:112] recreateIfNeeded on stopped-upgrade-443000: state=Stopped err=<nil>
	W0731 12:27:56.269006    8672 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:27:56.279485    8672 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-443000" ...
	I0731 12:27:56.283458    8672 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:27:56.283575    8672 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51213-:22,hostfwd=tcp::51214-:2376,hostname=stopped-upgrade-443000 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/disk.qcow2
	I0731 12:27:56.331186    8672 main.go:141] libmachine: STDOUT: 
	I0731 12:27:56.331216    8672 main.go:141] libmachine: STDERR: 
	I0731 12:27:56.331222    8672 main.go:141] libmachine: Waiting for VM to start (ssh -p 51213 docker@127.0.0.1)...
	I0731 12:28:17.179825    8672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0731 12:28:17.180053    8672 machine.go:94] provisionDockerMachine start ...
	I0731 12:28:17.180095    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.180239    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.180243    8672 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:28:17.249233    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:28:17.249249    8672 buildroot.go:166] provisioning hostname "stopped-upgrade-443000"
	I0731 12:28:17.249328    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.249452    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.249457    8672 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-443000 && echo "stopped-upgrade-443000" | sudo tee /etc/hostname
	I0731 12:28:17.321340    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-443000
	
	I0731 12:28:17.321408    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.321534    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.321542    8672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-443000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-443000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-443000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:28:17.394025    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:17.394041    8672 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19360-6578/.minikube CaCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19360-6578/.minikube}
	I0731 12:28:17.394050    8672 buildroot.go:174] setting up certificates
	I0731 12:28:17.394056    8672 provision.go:84] configureAuth start
	I0731 12:28:17.394065    8672 provision.go:143] copyHostCerts
	I0731 12:28:17.394159    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem, removing ...
	I0731 12:28:17.394166    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem
	I0731 12:28:17.394270    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.pem (1078 bytes)
	I0731 12:28:17.394443    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem, removing ...
	I0731 12:28:17.394447    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem
	I0731 12:28:17.394495    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/cert.pem (1123 bytes)
	I0731 12:28:17.394597    8672 exec_runner.go:144] found /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem, removing ...
	I0731 12:28:17.394602    8672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem
	I0731 12:28:17.394646    8672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19360-6578/.minikube/key.pem (1679 bytes)
	I0731 12:28:17.394722    8672 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-443000 san=[127.0.0.1 localhost minikube stopped-upgrade-443000]
	I0731 12:28:17.485777    8672 provision.go:177] copyRemoteCerts
	I0731 12:28:17.485829    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:28:17.485838    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:17.523006    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 12:28:17.530510    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:28:17.536946    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:28:17.542916    8672 provision.go:87] duration metric: took 148.859125ms to configureAuth
	I0731 12:28:17.542924    8672 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:28:17.543024    8672 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:28:17.543071    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.543154    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.543163    8672 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:28:17.612032    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:28:17.612046    8672 buildroot.go:70] root file system type: tmpfs
	I0731 12:28:17.612101    8672 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:28:17.612154    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.612268    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.612301    8672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:28:17.684357    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:28:17.684415    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:17.684541    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:17.684549    8672 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:28:18.054801    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 12:28:18.054814    8672 machine.go:97] duration metric: took 874.7695ms to provisionDockerMachine
	I0731 12:28:18.054821    8672 start.go:293] postStartSetup for "stopped-upgrade-443000" (driver="qemu2")
	I0731 12:28:18.054828    8672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:28:18.054896    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:28:18.054910    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:18.091783    8672 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:28:18.093257    8672 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:28:18.093267    8672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/addons for local assets ...
	I0731 12:28:18.093354    8672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19360-6578/.minikube/files for local assets ...
	I0731 12:28:18.093471    8672 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem -> 70682.pem in /etc/ssl/certs
	I0731 12:28:18.093604    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:28:18.096319    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:18.103027    8672 start.go:296] duration metric: took 48.201958ms for postStartSetup
	I0731 12:28:18.103039    8672 fix.go:56] duration metric: took 21.834501875s for fixHost
	I0731 12:28:18.103068    8672 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:18.103170    8672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fa6a10] 0x100fa9270 <nil>  [] 0s} localhost 51213 <nil> <nil>}
	I0731 12:28:18.103175    8672 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:28:18.173855    8672 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454098.070024296
	
	I0731 12:28:18.173867    8672 fix.go:216] guest clock: 1722454098.070024296
	I0731 12:28:18.173871    8672 fix.go:229] Guest: 2024-07-31 12:28:18.070024296 -0700 PDT Remote: 2024-07-31 12:28:18.103041 -0700 PDT m=+21.947280293 (delta=-33.016704ms)
	I0731 12:28:18.173883    8672 fix.go:200] guest clock delta is within tolerance: -33.016704ms
	I0731 12:28:18.173885    8672 start.go:83] releasing machines lock for "stopped-upgrade-443000", held for 21.905357291s
	I0731 12:28:18.173954    8672 ssh_runner.go:195] Run: cat /version.json
	I0731 12:28:18.173963    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:28:18.173992    8672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:28:18.174013    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	W0731 12:28:18.174725    8672 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51213: connect: connection refused
	I0731 12:28:18.174760    8672 retry.go:31] will retry after 185.307583ms: dial tcp [::1]:51213: connect: connection refused
	W0731 12:28:18.212206    8672 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:28:18.212293    8672 ssh_runner.go:195] Run: systemctl --version
	I0731 12:28:18.214276    8672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:28:18.215960    8672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:28:18.215999    8672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:28:18.219020    8672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:28:18.223999    8672 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:28:18.224013    8672 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.224129    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:18.231812    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:28:18.235472    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:28:18.239161    8672 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:28:18.239214    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:28:18.243214    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:18.247207    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:28:18.250748    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:28:18.254496    8672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:28:18.258776    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:28:18.262321    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:28:18.265378    8672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:28:18.268097    8672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:28:18.271525    8672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:28:18.275118    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:18.347392    8672 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:28:18.359739    8672 start.go:495] detecting cgroup driver to use...
	I0731 12:28:18.359810    8672 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:28:18.377769    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:18.385201    8672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:28:18.394561    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:18.402672    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:18.444596    8672 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:28:18.494803    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:28:18.500405    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:18.507700    8672 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:28:18.509421    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:28:18.512478    8672 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:28:18.519171    8672 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:28:18.597899    8672 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:28:18.675428    8672 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:28:18.675492    8672 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:28:18.681187    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:18.754368    8672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:19.880444    8672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1260745s)
	I0731 12:28:19.880504    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:28:19.885374    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:19.890399    8672 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:28:19.959097    8672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:28:20.019480    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:20.103252    8672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:28:20.109650    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:28:20.113932    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:20.181598    8672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:28:20.226151    8672 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:28:20.226233    8672 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:28:20.228914    8672 start.go:563] Will wait 60s for crictl version
	I0731 12:28:20.228975    8672 ssh_runner.go:195] Run: which crictl
	I0731 12:28:20.230482    8672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:28:20.246201    8672 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:28:20.246277    8672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:20.268314    8672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:28:20.292074    8672 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:28:20.292143    8672 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:28:20.293691    8672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:28:20.297767    8672 kubeadm.go:883] updating cluster {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:28:20.297821    8672 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:28:20.297870    8672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:20.309446    8672 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:20.309457    8672 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:20.309509    8672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:20.312771    8672 ssh_runner.go:195] Run: which lz4
	I0731 12:28:20.314431    8672 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:28:20.315824    8672 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:28:20.315842    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:28:21.239486    8672 docker.go:649] duration metric: took 925.106416ms to copy over tarball
	I0731 12:28:21.239550    8672 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:28:22.412347    8672 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17279975s)
	I0731 12:28:22.412361    8672 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:28:22.429101    8672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:28:22.432610    8672 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:28:22.438176    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:22.513881    8672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:28:24.213185    8672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.699313333s)
	I0731 12:28:24.213289    8672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:28:24.224978    8672 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:28:24.224986    8672 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:28:24.224991    8672 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:28:24.228868    8672 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.230803    8672 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.232576    8672 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.232605    8672 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.235111    8672 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.235131    8672 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.237361    8672 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:28:24.237361    8672 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.239418    8672 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.239618    8672 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.241504    8672 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:28:24.241686    8672 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.242864    8672 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.243058    8672 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.244465    8672 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.245350    8672 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.632976    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.644571    8672 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:28:24.644610    8672 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.644660    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:28:24.655818    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0731 12:28:24.678444    8672 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:24.678569    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.679421    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:28:24.681628    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.684938    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.690963    8672 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:28:24.690984    8672 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.691029    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:28:24.695870    8672 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:28:24.695891    8672 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:28:24.695937    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:28:24.702721    8672 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:28:24.702742    8672 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.702788    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:28:24.713723    8672 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:28:24.713744    8672 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.713786    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:28:24.718884    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:28:24.719002    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:24.722473    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:28:24.722541    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:28:24.722574    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:28:24.732092    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:28:24.732117    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:28:24.732128    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:28:24.732127    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:28:24.732160    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:28:24.732311    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:24.738966    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.741624    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.743762    8672 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:28:24.743774    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:28:24.744141    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:28:24.744167    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:28:24.796725    8672 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:28:24.796756    8672 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.796787    8672 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:28:24.796820    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:28:24.796844    8672 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.796871    8672 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:28:24.814212    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0731 12:28:24.836241    8672 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:28:24.836363    8672 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.879934    8672 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:28:24.879974    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:28:24.880039    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:28:24.880293    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:28:24.903862    8672 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:28:24.903898    8672 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:24.903969    8672 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:28:25.020686    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:28:25.020739    8672 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:28:25.020867    8672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:28:25.028871    8672 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:28:25.028905    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:28:25.111852    8672 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:28:25.111867    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:28:25.434944    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:28:25.434968    8672 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:28:25.434976    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:28:25.588692    8672 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:28:25.588732    8672 cache_images.go:92] duration metric: took 1.363755541s to LoadCachedImages
	W0731 12:28:25.588777    8672 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0731 12:28:25.588783    8672 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:28:25.588856    8672 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-443000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:28:25.588922    8672 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:28:25.603413    8672 cni.go:84] Creating CNI manager for ""
	I0731 12:28:25.603424    8672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:28:25.603430    8672 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:28:25.603438    8672 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-443000 NodeName:stopped-upgrade-443000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:28:25.603500    8672 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-443000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:28:25.603554    8672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:28:25.607025    8672 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:28:25.607056    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:28:25.610305    8672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:28:25.615415    8672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:28:25.620552    8672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:28:25.626418    8672 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:28:25.627763    8672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:28:25.631521    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:25.703161    8672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:28:25.709183    8672 certs.go:68] Setting up /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000 for IP: 10.0.2.15
	I0731 12:28:25.709193    8672 certs.go:194] generating shared ca certs ...
	I0731 12:28:25.709203    8672 certs.go:226] acquiring lock for ca certs: {Name:mk2e60bc5d1dd01990778560005f880e3d93cfec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.709491    8672 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key
	I0731 12:28:25.709547    8672 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key
	I0731 12:28:25.709552    8672 certs.go:256] generating profile certs ...
	I0731 12:28:25.709637    8672 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key
	I0731 12:28:25.709653    8672 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4
	I0731 12:28:25.709665    8672 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:28:25.773805    8672 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 ...
	I0731 12:28:25.773817    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4: {Name:mk4622d7feb6c59e775b77a6d0024e035ded3ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.774160    8672 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 ...
	I0731 12:28:25.774165    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4: {Name:mk3e19b2276c5e5d3fd8c2bfa1bf3463fca3b07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.774299    8672 certs.go:381] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt
	I0731 12:28:25.774432    8672 certs.go:385] copying /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key
	I0731 12:28:25.774582    8672 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.key
	I0731 12:28:25.774717    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem (1338 bytes)
	W0731 12:28:25.774746    8672 certs.go:480] ignoring /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068_empty.pem, impossibly tiny 0 bytes
	I0731 12:28:25.774751    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:28:25.774775    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem (1078 bytes)
	I0731 12:28:25.774795    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:28:25.774812    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/key.pem (1679 bytes)
	I0731 12:28:25.774850    8672 certs.go:484] found cert: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem (1708 bytes)
	I0731 12:28:25.775203    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:28:25.782069    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:28:25.788850    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:28:25.795735    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:28:25.803052    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:28:25.810441    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:28:25.817358    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:28:25.824276    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 12:28:25.831257    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/ssl/certs/70682.pem --> /usr/share/ca-certificates/70682.pem (1708 bytes)
	I0731 12:28:25.839025    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:28:25.846669    8672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/7068.pem --> /usr/share/ca-certificates/7068.pem (1338 bytes)
	I0731 12:28:25.854905    8672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:28:25.861050    8672 ssh_runner.go:195] Run: openssl version
	I0731 12:28:25.863176    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70682.pem && ln -fs /usr/share/ca-certificates/70682.pem /etc/ssl/certs/70682.pem"
	I0731 12:28:25.867116    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.868909    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:16 /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.868938    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70682.pem
	I0731 12:28:25.870880    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70682.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:28:25.874734    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:28:25.878457    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.880332    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.880416    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:28:25.882450    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:28:25.886298    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7068.pem && ln -fs /usr/share/ca-certificates/7068.pem /etc/ssl/certs/7068.pem"
	I0731 12:28:25.889503    8672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.891120    8672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:16 /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.891148    8672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7068.pem
	I0731 12:28:25.893522    8672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7068.pem /etc/ssl/certs/51391683.0"
	I0731 12:28:25.896912    8672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:28:25.898628    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:28:25.901096    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:28:25.903243    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:28:25.905462    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:28:25.907636    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:28:25.909986    8672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 12:28:25.912105    8672 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51245 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:28:25.912184    8672 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:25.924026    8672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:28:25.927887    8672 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:28:25.927896    8672 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:28:25.927940    8672 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:28:25.931447    8672 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:28:25.931492    8672 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-443000" does not appear in /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:28:25.931508    8672 kubeconfig.go:62] /Users/jenkins/minikube-integration/19360-6578/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-443000" cluster setting kubeconfig missing "stopped-upgrade-443000" context setting]
	I0731 12:28:25.931693    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:28:25.932333    8672 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:28:25.933223    8672 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:28:25.936571    8672 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-443000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 12:28:25.936582    8672 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:28:25.936638    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:28:25.948928    8672 docker.go:483] Stopping containers: [bc8f9494b72e 681b91b46f8a d36958118793 c9212cfe387a 420f9dcb4cd0 a607e0e22226 dd7327a89049 575c86423b5f]
	I0731 12:28:25.949003    8672 ssh_runner.go:195] Run: docker stop bc8f9494b72e 681b91b46f8a d36958118793 c9212cfe387a 420f9dcb4cd0 a607e0e22226 dd7327a89049 575c86423b5f
	I0731 12:28:25.961710    8672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:28:25.967464    8672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:28:25.971008    8672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:28:25.971017    8672 kubeadm.go:157] found existing configuration files:
	
	I0731 12:28:25.971056    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf
	I0731 12:28:25.974143    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:28:25.974180    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:28:25.976934    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf
	I0731 12:28:25.979629    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:28:25.979668    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:28:25.982971    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf
	I0731 12:28:25.986115    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:28:25.986166    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:28:25.989379    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf
	I0731 12:28:25.992149    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:28:25.992206    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:28:25.995594    8672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:28:25.998815    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.025332    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.390636    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.501263    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.529900    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:28:26.552983    8672 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:28:26.553060    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.055224    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.555096    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:28:27.560930    8672 api_server.go:72] duration metric: took 1.007963584s to wait for apiserver process to appear ...
	I0731 12:28:27.560939    8672 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:28:27.560948    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:32.562963    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:32.562990    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:37.563508    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:37.563572    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:42.563832    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:42.563909    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:47.564358    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:47.564417    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:52.565113    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:52.565162    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:57.566778    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:57.566886    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:02.568192    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:02.568237    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:07.569350    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:07.569385    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:12.569748    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.569796    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:17.571718    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:17.571755    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:22.572786    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:22.572847    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:27.574614    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:27.574743    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:27.593336    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:27.593431    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:27.607065    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:27.607138    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:27.618436    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:27.618501    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:27.629355    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:27.629426    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:27.639507    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:27.639573    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:27.649510    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:27.649576    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:27.666032    8672 logs.go:276] 0 containers: []
	W0731 12:29:27.666045    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:27.666100    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:27.676733    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:27.676752    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:27.676758    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:27.691420    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:27.691431    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:27.703449    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:27.703461    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:27.715374    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:27.715387    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:27.732735    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:27.732746    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:27.746386    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:27.746401    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:27.761248    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:27.761259    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:27.775438    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:27.775451    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:27.787594    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:27.787607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:27.798586    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:27.798596    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:27.838343    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:27.838352    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:27.843225    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:27.843232    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:27.856186    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:27.856196    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:27.876179    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:27.876189    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:27.887407    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:27.887421    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:27.912456    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:27.912463    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:28.011385    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:28.011396    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:30.555064    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:35.557343    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:35.557503    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:35.576263    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:35.576370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:35.590908    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:35.590982    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:35.607975    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:35.608057    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:35.620570    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:35.620644    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:35.634125    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:35.634195    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:35.644752    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:35.644823    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:35.654540    8672 logs.go:276] 0 containers: []
	W0731 12:29:35.654552    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:35.654602    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:35.664851    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:35.664871    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:35.664877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:35.676309    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:35.676321    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:35.702581    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:35.702588    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:35.740997    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:35.741004    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:35.745362    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:35.745371    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:35.759971    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:35.759982    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:35.773372    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:35.773381    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:35.784653    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:35.784661    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:35.795530    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:35.795542    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:35.831470    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:35.831485    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:35.871710    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:35.871722    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:35.883353    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:35.883364    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:35.898815    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:35.898826    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:35.911730    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:35.911742    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:35.926329    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:35.926339    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:35.938891    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:35.938905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:35.953701    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:35.953709    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:38.473151    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:43.475526    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:43.475840    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:43.504249    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:43.504378    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:43.522520    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:43.522625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:43.537660    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:43.537735    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:43.549139    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:43.549201    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:43.564441    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:43.564514    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:43.575043    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:43.575114    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:43.585241    8672 logs.go:276] 0 containers: []
	W0731 12:29:43.585255    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:43.585315    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:43.596372    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:43.596391    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:43.596397    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:43.607608    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:43.607641    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:43.618710    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:43.618720    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:43.642856    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:43.642863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:43.660982    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:43.660993    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:43.672176    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:43.672186    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:43.707218    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:43.707232    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:43.725013    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:43.725026    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:43.738858    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:43.738870    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:43.750817    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:43.750830    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:43.788016    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:43.788025    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:43.798577    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:43.798587    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:43.813199    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:43.813210    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:43.828245    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:43.828259    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:43.868236    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:43.868245    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:43.872756    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:43.872762    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:43.889038    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:43.889048    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:46.409650    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:51.411930    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:51.412137    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:51.435581    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:51.435683    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:51.452111    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:51.452203    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:51.464636    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:51.464696    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:51.475551    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:51.475630    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:51.485928    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:51.486000    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:51.500043    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:51.500112    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:51.510644    8672 logs.go:276] 0 containers: []
	W0731 12:29:51.510654    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:51.510710    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:51.521583    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:51.521599    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:51.521604    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:51.538308    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:51.538322    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:51.565001    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:51.565015    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:51.600775    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:51.600789    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:51.616227    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:51.616240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:51.628503    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:51.628515    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:51.639646    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:51.639661    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:51.651367    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:51.651377    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:51.666096    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:51.666106    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:51.682818    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:51.682829    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:29:51.694147    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:51.694157    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:51.707668    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:51.707678    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:51.724929    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:51.724940    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:51.761390    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:51.761397    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:51.765307    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:51.765312    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:51.779024    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:51.779033    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:51.822176    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:51.822186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:54.336016    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:59.338229    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:59.338435    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:59.352368    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:29:59.352451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:59.363845    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:29:59.363918    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:59.374248    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:29:59.374313    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:59.384446    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:29:59.384521    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:59.394563    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:29:59.394627    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:59.404815    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:29:59.404883    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:59.415053    8672 logs.go:276] 0 containers: []
	W0731 12:29:59.415064    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:59.415116    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:59.425759    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:29:59.425774    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:59.425781    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:59.463723    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:29:59.463736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:29:59.502033    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:29:59.502043    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:29:59.514270    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:29:59.514280    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:59.531815    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:59.531827    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:59.535947    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:29:59.535955    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:29:59.550435    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:29:59.550444    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:29:59.564697    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:29:59.564709    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:29:59.581987    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:59.581998    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:59.618840    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:29:59.618850    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:29:59.633267    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:29:59.633281    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:29:59.651778    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:29:59.651791    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:29:59.664888    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:59.664898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:59.688316    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:29:59.688322    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:29:59.699966    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:29:59.699976    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:29:59.714965    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:29:59.714976    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:29:59.726866    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:29:59.726877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
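This closes one full gathering round: docker logs --tail 400 for each discovered container, journalctl for the kubelet and docker/cri-docker units, filtered dmesg, and kubectl describe nodes against the in-VM kubeconfig. A compact Go sketch of the fan-out (the commands mirror the ssh_runner lines above; running them locally rather than over SSH is an assumption for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through bash, as the ssh_runner lines do.
    func gather(name, command string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
            return
        }
        _ = out // a real collector would buffer or upload this
    }

    func main() {
        // two of the container IDs from the log, for brevity
        for name, id := range map[string]string{
            "kube-apiserver [bf1811f37e64]": "bf1811f37e64",
            "coredns [9ef7681dd459]":        "9ef7681dd459",
        } {
            gather(name, "docker logs --tail 400 "+id)
        }
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    }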
	I0731 12:30:02.240172    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:07.242351    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:07.242564    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:07.259938    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:07.260028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:07.273614    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:07.273679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:07.284410    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:07.284500    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:07.294375    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:07.294441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:07.304942    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:07.305008    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:07.316389    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:07.316451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:07.326632    8672 logs.go:276] 0 containers: []
	W0731 12:30:07.326641    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:07.326690    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:07.337402    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:07.337416    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:07.337421    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:07.363292    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:07.363303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:07.403702    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:07.403716    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:07.407891    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:07.407897    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:07.421652    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:07.421669    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:07.459462    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:07.459472    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:07.470860    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:07.470872    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:07.482813    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:07.482824    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:07.497243    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:07.497253    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:07.509297    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:07.509307    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:07.524439    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:07.524450    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:07.535725    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:07.535736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:07.550457    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:07.550466    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:07.586772    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:07.586782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:07.600703    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:07.600713    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:07.612656    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:07.612667    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:07.630197    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:07.630212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:10.144496    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:15.146390    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:15.146808    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:15.179007    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:15.179137    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:15.198576    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:15.198671    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:15.213511    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:15.213576    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:15.231749    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:15.231807    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:15.242522    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:15.242579    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:15.253599    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:15.253671    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:15.263621    8672 logs.go:276] 0 containers: []
	W0731 12:30:15.263631    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:15.263679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:15.273980    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:15.273996    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:15.274002    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:15.312278    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:15.312293    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:15.349805    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:15.349816    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:15.375198    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:15.375210    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:15.387020    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:15.387030    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:15.405068    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:15.405082    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:15.416459    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:15.416470    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:15.428073    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:15.428086    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:15.439565    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:15.439575    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:15.459823    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:15.459834    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:15.493832    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:15.493848    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:15.508579    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:15.508590    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:15.523001    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:15.523017    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:15.534278    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:15.534290    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:15.559142    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:15.559151    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:15.563236    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:15.563242    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:15.577698    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:15.577712    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:18.094673    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:23.096993    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:23.097106    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:23.109281    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:23.109353    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:23.125701    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:23.125770    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:23.135987    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:23.136049    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:23.147048    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:23.147118    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:23.157578    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:23.157644    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:23.168063    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:23.168128    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:23.178257    8672 logs.go:276] 0 containers: []
	W0731 12:30:23.178269    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:23.178320    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:23.191866    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:23.191884    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:23.191891    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:23.228955    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:23.228966    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:23.247239    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:23.247250    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:23.258648    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:23.258657    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:23.274835    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:23.274845    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:23.285999    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:23.286010    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:23.290546    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:23.290556    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:23.306477    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:23.306487    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:23.321230    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:23.321240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:23.338698    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:23.338710    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:23.350426    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:23.350436    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:23.367946    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:23.367958    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:23.379408    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:23.379419    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:23.391198    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:23.391212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:23.402625    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:23.402633    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:23.439317    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:23.439329    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:23.477738    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:23.477748    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:26.002312    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:31.004604    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:31.004803    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:31.023884    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:31.023980    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:31.038160    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:31.038239    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:31.050499    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:31.050571    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:31.061384    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:31.061453    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:31.072265    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:31.072337    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:31.082643    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:31.082715    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:31.092921    8672 logs.go:276] 0 containers: []
	W0731 12:30:31.092933    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:31.092994    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:31.103625    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:31.103642    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:31.103649    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:31.145554    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:31.145566    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:31.159893    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:31.159905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:31.174699    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:31.174711    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:31.186419    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:31.186430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:31.198063    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:31.198074    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:31.236177    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:31.236186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:31.251933    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:31.251944    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:31.269051    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:31.269061    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:31.280134    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:31.280144    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:31.317216    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:31.317226    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:31.321730    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:31.321737    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:31.332848    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:31.332860    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:31.344908    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:31.344921    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:31.370581    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:31.370591    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:31.385016    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:31.385031    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:31.403663    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:31.403675    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:33.917727    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:38.919860    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:38.920044    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:38.940722    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:38.940811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:38.953461    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:38.953535    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:38.964615    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:38.964687    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:38.975919    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:38.975992    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:38.986261    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:38.986330    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:38.996590    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:38.996658    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:39.008116    8672 logs.go:276] 0 containers: []
	W0731 12:30:39.008127    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:39.008181    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:39.018506    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:39.018525    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:39.018531    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:39.033475    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:39.033487    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:39.051351    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:39.051360    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:39.062906    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:39.062918    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:39.086480    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:39.086488    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:39.097842    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:39.097851    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:39.111289    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:39.111300    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:39.148427    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:39.148436    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:39.159986    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:39.159997    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:39.171703    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:39.171716    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:39.175960    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:39.175967    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:39.212416    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:39.212431    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:39.227254    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:39.227264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:39.238803    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:39.238812    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:39.277005    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:39.277015    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:39.294832    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:39.294844    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:39.309456    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:39.309467    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:41.825063    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:46.827336    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:46.827537    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:46.845587    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:46.845682    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:46.860025    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:46.860099    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:46.871308    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:46.871383    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:46.881549    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:46.881618    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:46.891614    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:46.891679    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:46.902009    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:46.902079    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:46.911788    8672 logs.go:276] 0 containers: []
	W0731 12:30:46.911797    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:46.911852    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:46.922290    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:46.922309    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:46.922315    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:46.936681    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:46.936692    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:46.973389    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:46.973400    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:46.988277    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:46.988287    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:47.005843    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:47.005853    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:47.020122    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:47.020135    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:47.032162    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:47.032172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:47.043994    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:47.044005    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:47.048022    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:47.048029    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:47.059950    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:47.059965    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:47.074324    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:47.074334    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:47.088725    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:47.088736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:47.100058    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:47.100069    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:47.124226    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:47.124233    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:47.158469    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:47.158481    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:47.174743    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:47.174753    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:47.191052    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:47.191062    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:49.732684    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:54.734934    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:54.735163    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:54.752617    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:30:54.752708    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:54.765997    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:30:54.766063    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:54.777279    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:30:54.777346    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:54.787894    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:30:54.787966    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:54.798594    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:30:54.798657    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:54.809591    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:30:54.809662    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:54.828133    8672 logs.go:276] 0 containers: []
	W0731 12:30:54.828143    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:54.828204    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:54.838067    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:30:54.838083    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:54.838089    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:54.875571    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:54.875584    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:54.909575    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:30:54.909586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:30:54.921393    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:30:54.921405    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:30:54.932737    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:30:54.932748    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:30:54.943888    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:54.943900    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:54.967624    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:30:54.967631    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:54.979503    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:30:54.979519    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:30:54.993814    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:30:54.993824    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:30:55.011947    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:30:55.011959    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:30:55.025804    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:30:55.025814    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:30:55.037825    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:30:55.037836    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:30:55.052572    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:30:55.052581    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:30:55.067846    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:55.067857    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:55.072231    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:30:55.072237    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:30:55.109060    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:30:55.109073    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:30:55.129724    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:30:55.129736    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:30:57.643856    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:02.646041    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:02.646261    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:02.671412    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:02.671531    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:02.688275    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:02.688359    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:02.702130    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:02.702191    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:02.713397    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:02.713470    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:02.727677    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:02.727743    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:02.741717    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:02.741783    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:02.755591    8672 logs.go:276] 0 containers: []
	W0731 12:31:02.755605    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:02.755664    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:02.766225    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:02.766240    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:02.766245    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:02.800231    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:02.800242    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:02.816315    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:02.816327    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:02.830396    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:02.830422    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:02.844504    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:02.844516    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:02.859192    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:02.859205    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:02.871392    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:02.871403    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:02.883289    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:02.883301    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:02.921295    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:02.921304    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:02.932428    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:02.932438    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:02.944514    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:02.944523    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:02.960979    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:02.960993    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:02.972565    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:02.972575    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:03.010865    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:03.010872    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:03.015233    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:03.015239    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:03.032597    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:03.032607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:03.044205    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:03.044219    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:05.569099    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:10.571202    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:10.571322    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:10.587425    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:10.587503    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:10.597940    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:10.598011    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:10.608810    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:10.608882    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:10.619104    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:10.619178    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:10.629691    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:10.629756    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:10.639890    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:10.639960    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:10.651087    8672 logs.go:276] 0 containers: []
	W0731 12:31:10.651098    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:10.651159    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:10.661447    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:10.661469    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:10.661478    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:10.701140    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:10.701151    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:10.737008    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:10.737024    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:10.748945    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:10.748957    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:10.767998    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:10.768010    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:10.782994    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:10.783004    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:10.801620    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:10.801631    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:10.824470    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:10.824478    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:10.835831    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:10.835840    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:10.847266    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:10.847279    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:10.859417    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:10.859428    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:10.896002    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:10.896012    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:10.920591    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:10.920604    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:10.924855    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:10.924864    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:10.938785    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:10.938797    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:10.979695    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:10.979706    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:10.994159    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:10.994168    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:13.506830    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:18.509029    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:18.509230    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:18.525906    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:18.525988    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:18.537423    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:18.537493    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:18.547846    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:18.547919    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:18.558299    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:18.558371    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:18.568643    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:18.568720    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:18.579108    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:18.579174    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:18.589503    8672 logs.go:276] 0 containers: []
	W0731 12:31:18.589513    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:18.589570    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:18.599861    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:18.599877    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:18.599883    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:18.613807    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:18.613819    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:18.650620    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:18.650635    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:18.662336    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:18.662346    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:18.676767    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:18.676780    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:18.701240    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:18.701246    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:18.713888    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:18.713898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:18.718174    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:18.718185    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:18.732539    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:18.732550    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:18.744143    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:18.744154    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:18.759457    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:18.759466    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:18.771609    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:18.771619    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:18.808294    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:18.808302    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:18.844069    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:18.844083    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:18.862597    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:18.862608    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:18.874491    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:18.874501    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:18.891487    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:18.891496    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:21.404653    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:26.406823    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:26.406969    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:26.421590    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:26.421669    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:26.435839    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:26.435913    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:26.446279    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:26.446350    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:26.456642    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:26.456716    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:26.467177    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:26.467243    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:26.477393    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:26.477461    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:26.487674    8672 logs.go:276] 0 containers: []
	W0731 12:31:26.487686    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:26.487740    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:26.497901    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:26.497918    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:26.497926    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:26.512247    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:26.512257    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:26.525417    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:26.525429    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:26.560081    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:26.560098    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:26.574621    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:26.574634    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:26.613582    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:26.613591    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:26.628435    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:26.628446    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:26.652285    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:26.652295    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:26.664599    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:26.664610    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:26.704058    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:26.704065    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:26.707910    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:26.707916    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:26.728106    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:26.728118    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:26.742782    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:26.742798    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:26.754472    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:26.754482    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:26.773471    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:26.773481    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:26.785403    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:26.785419    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:26.798881    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:26.798891    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:29.312236    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:34.314412    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:34.314563    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:34.326387    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:34.326472    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:34.338572    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:34.338650    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:34.348782    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:34.348856    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:34.359412    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:34.359492    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:34.370125    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:34.370197    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:34.383366    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:34.383441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:34.393038    8672 logs.go:276] 0 containers: []
	W0731 12:31:34.393050    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:34.393110    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:34.403722    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:34.403739    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:34.403744    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:34.443993    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:34.444015    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:34.459570    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:34.459583    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:34.471969    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:34.471983    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:34.483314    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:34.483324    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:34.521633    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:34.521644    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:34.557325    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:34.557337    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:34.569011    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:34.569022    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:34.583128    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:34.583145    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:34.600864    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:34.600873    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:34.612493    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:34.612504    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:34.627141    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:34.627152    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:34.642047    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:34.642056    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:34.646557    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:34.646563    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:34.660584    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:34.660595    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:34.673018    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:34.673028    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:34.685249    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:34.685259    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:37.210590    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:42.212759    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:42.212957    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:42.233100    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:42.233184    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:42.245360    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:42.245441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:42.260626    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:42.260700    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:42.272089    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:42.272167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:42.282355    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:42.282427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:42.293177    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:42.293246    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:42.304209    8672 logs.go:276] 0 containers: []
	W0731 12:31:42.304220    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:42.304280    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:42.314384    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:42.314400    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:42.314405    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:42.325374    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:42.325386    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:42.365587    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:42.365599    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:42.401209    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:42.401221    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:42.416782    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:42.416792    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:42.433906    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:42.433917    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:42.451225    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:42.451234    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:42.455263    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:42.455270    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:42.492864    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:42.492874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:42.506779    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:42.506790    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:42.518485    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:42.518496    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:42.530818    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:42.530828    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:42.542654    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:42.542670    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:42.553924    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:42.553935    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:42.576928    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:42.576938    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:42.588404    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:42.588416    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:42.604712    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:42.604723    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:45.124796    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:50.126967    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:50.127121    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:50.156250    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:50.156344    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:50.171665    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:50.171736    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:50.182660    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:50.182724    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:50.192941    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:50.193011    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:50.203709    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:50.203776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:50.214658    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:50.214723    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:50.225314    8672 logs.go:276] 0 containers: []
	W0731 12:31:50.225326    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:50.225386    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:50.236095    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:50.236113    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:50.236120    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:50.247811    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:50.247822    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:50.271211    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:50.271218    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:50.310135    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:50.310146    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:50.314229    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:50.314237    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:50.353512    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:50.353523    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:50.368581    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:50.368592    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:50.381301    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:50.381316    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:50.393073    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:50.393083    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:50.428890    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:50.428905    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:31:50.443276    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:50.443285    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:50.454150    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:50.454162    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:50.472861    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:50.472873    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:50.486597    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:50.486607    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:50.500887    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:50.500898    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:50.512711    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:50.512724    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:50.530012    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:50.530027    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:53.048240    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:58.050363    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:58.050563    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:58.071690    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:31:58.071792    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:58.086355    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:31:58.086434    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:58.097910    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:31:58.097979    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:58.109552    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:31:58.109617    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:58.120066    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:31:58.120123    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:58.136295    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:31:58.136402    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:58.148389    8672 logs.go:276] 0 containers: []
	W0731 12:31:58.148399    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:58.148456    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:58.159291    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:31:58.159307    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:31:58.159314    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:31:58.177160    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:31:58.177171    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:31:58.188199    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:58.188212    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:58.212141    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:31:58.212158    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:31:58.257230    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:58.257241    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:58.261653    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:58.261659    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:58.298241    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:31:58.298252    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:31:58.313097    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:31:58.313110    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:31:58.324810    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:31:58.324822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:31:58.348185    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:58.348195    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:58.388040    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:31:58.388054    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:31:58.399802    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:31:58.399820    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:31:58.411027    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:31:58.411037    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:31:58.422641    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:31:58.422656    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:31:58.439912    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:31:58.439926    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:31:58.454884    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:31:58.454893    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:58.466303    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:31:58.466317    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:00.980197    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:05.982388    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:05.982764    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:06.023487    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:06.023625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:06.044816    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:06.044915    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:06.060102    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:06.060187    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:06.079979    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:06.080055    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:06.090378    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:06.090441    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:06.101202    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:06.101277    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:06.111603    8672 logs.go:276] 0 containers: []
	W0731 12:32:06.111613    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:06.111669    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:06.122304    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:06.122321    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:06.122328    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:06.134798    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:06.134810    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:06.151035    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:06.151046    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:06.170995    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:06.171005    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:06.185810    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:06.185822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:06.197386    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:06.197396    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:06.234267    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:06.234277    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:06.253707    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:06.253718    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:06.290021    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:06.290030    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:06.302012    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:06.302024    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:06.319231    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:06.319245    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:06.330218    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:06.330229    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:06.352250    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:06.352262    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:06.387588    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:06.387599    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:06.406407    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:06.406418    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:06.418534    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:06.418545    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:06.424568    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:06.424580    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:08.938380    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:13.940579    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:13.940862    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:13.968152    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:13.968289    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:13.985926    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:13.986018    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:13.999623    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:13.999694    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:14.011612    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:14.011683    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:14.022150    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:14.022214    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:14.032479    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:14.032557    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:14.042805    8672 logs.go:276] 0 containers: []
	W0731 12:32:14.042818    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:14.042875    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:14.053094    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:14.053109    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:14.053114    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:14.067554    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:14.067564    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:14.072172    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:14.072179    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:14.106860    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:14.106874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:14.124299    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:14.124309    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:14.135779    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:14.135791    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:14.159489    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:14.159498    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:14.199076    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:14.199084    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:14.214172    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:14.214183    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:14.225341    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:14.225351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:14.239001    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:14.239011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:14.253583    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:14.253593    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:14.264331    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:14.264343    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:14.276448    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:14.276457    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:14.304110    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:14.304125    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:14.329750    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:14.329767    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:14.345634    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:14.345649    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:16.887394    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:21.889552    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:21.889763    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:21.906227    8672 logs.go:276] 2 containers: [bf1811f37e64 c9212cfe387a]
	I0731 12:32:21.906314    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:21.919100    8672 logs.go:276] 2 containers: [f2e06e2e4325 681b91b46f8a]
	I0731 12:32:21.919167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:21.930021    8672 logs.go:276] 1 containers: [9ef7681dd459]
	I0731 12:32:21.930088    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:21.942256    8672 logs.go:276] 2 containers: [7233d71fb9d1 bc8f9494b72e]
	I0731 12:32:21.942330    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:21.953860    8672 logs.go:276] 1 containers: [3a1d027f24f5]
	I0731 12:32:21.953932    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:21.964120    8672 logs.go:276] 2 containers: [05bc08f9a6a8 d36958118793]
	I0731 12:32:21.964187    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:21.974366    8672 logs.go:276] 0 containers: []
	W0731 12:32:21.974375    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:21.974426    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:21.984506    8672 logs.go:276] 2 containers: [f30b185fdba1 b12804058059]
	I0731 12:32:21.984522    8672 logs.go:123] Gathering logs for storage-provisioner [f30b185fdba1] ...
	I0731 12:32:21.984528    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30b185fdba1"
	I0731 12:32:21.995906    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:32:21.995919    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:22.007714    8672 logs.go:123] Gathering logs for kube-apiserver [c9212cfe387a] ...
	I0731 12:32:22.007725    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9212cfe387a"
	I0731 12:32:22.045436    8672 logs.go:123] Gathering logs for etcd [681b91b46f8a] ...
	I0731 12:32:22.045450    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681b91b46f8a"
	I0731 12:32:22.059440    8672 logs.go:123] Gathering logs for kube-scheduler [bc8f9494b72e] ...
	I0731 12:32:22.059449    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8f9494b72e"
	I0731 12:32:22.073997    8672 logs.go:123] Gathering logs for kube-proxy [3a1d027f24f5] ...
	I0731 12:32:22.074011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a1d027f24f5"
	I0731 12:32:22.086000    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:22.086011    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:22.120562    8672 logs.go:123] Gathering logs for kube-apiserver [bf1811f37e64] ...
	I0731 12:32:22.120573    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1811f37e64"
	I0731 12:32:22.135122    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:22.135134    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:22.140018    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:22.140027    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:22.162795    8672 logs.go:123] Gathering logs for coredns [9ef7681dd459] ...
	I0731 12:32:22.162816    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef7681dd459"
	I0731 12:32:22.176654    8672 logs.go:123] Gathering logs for kube-scheduler [7233d71fb9d1] ...
	I0731 12:32:22.176668    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7233d71fb9d1"
	I0731 12:32:22.190447    8672 logs.go:123] Gathering logs for kube-controller-manager [05bc08f9a6a8] ...
	I0731 12:32:22.190460    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bc08f9a6a8"
	I0731 12:32:22.208673    8672 logs.go:123] Gathering logs for kube-controller-manager [d36958118793] ...
	I0731 12:32:22.208685    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36958118793"
	I0731 12:32:22.223607    8672 logs.go:123] Gathering logs for storage-provisioner [b12804058059] ...
	I0731 12:32:22.223622    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12804058059"
	I0731 12:32:22.236493    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:22.236504    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:22.274771    8672 logs.go:123] Gathering logs for etcd [f2e06e2e4325] ...
	I0731 12:32:22.274779    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e06e2e4325"
	I0731 12:32:24.790090    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:29.792227    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:29.792306    8672 kubeadm.go:597] duration metric: took 4m3.8682915s to restartPrimaryControlPlane
	W0731 12:32:29.792373    8672 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:32:29.792404    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:32:30.876316    8672 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.083916417s)
	I0731 12:32:30.876394    8672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:32:30.881380    8672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:32:30.884144    8672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:32:30.886849    8672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:32:30.886855    8672 kubeadm.go:157] found existing configuration files:
	
	I0731 12:32:30.886877    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf
	I0731 12:32:30.889435    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:32:30.889458    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:32:30.891861    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf
	I0731 12:32:30.894839    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:32:30.894858    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:32:30.897678    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf
	I0731 12:32:30.900241    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:32:30.900259    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:32:30.903376    8672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf
	I0731 12:32:30.906392    8672 kubeadm.go:163] "https://control-plane.minikube.internal:51245" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51245 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:32:30.906416    8672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:32:30.909022    8672 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:32:30.927428    8672 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:32:30.927593    8672 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:32:30.979321    8672 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:32:30.979370    8672 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:32:30.979484    8672 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:32:31.030915    8672 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:32:31.035111    8672 out.go:204]   - Generating certificates and keys ...
	I0731 12:32:31.035145    8672 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:32:31.035175    8672 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:32:31.035214    8672 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:32:31.035242    8672 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:32:31.035276    8672 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:32:31.035298    8672 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:32:31.035353    8672 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:32:31.035409    8672 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:32:31.035441    8672 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:32:31.035488    8672 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:32:31.035516    8672 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:32:31.035541    8672 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:32:31.134824    8672 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:32:31.197629    8672 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:32:31.543699    8672 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:32:31.595029    8672 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:32:31.628116    8672 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:32:31.628486    8672 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:32:31.628541    8672 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:32:31.701545    8672 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:32:31.704870    8672 out.go:204]   - Booting up control plane ...
	I0731 12:32:31.704913    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:32:31.705806    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:32:31.706497    8672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:32:31.706659    8672 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:32:31.707788    8672 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:32:36.209654    8672 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501861 seconds
	I0731 12:32:36.209754    8672 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:32:36.213792    8672 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:32:36.729904    8672 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:32:36.730118    8672 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-443000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:32:37.234868    8672 kubeadm.go:310] [bootstrap-token] Using token: 6dq04j.kb1wbzf2t3iztkgl
	I0731 12:32:37.238411    8672 out.go:204]   - Configuring RBAC rules ...
	I0731 12:32:37.238475    8672 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:32:37.238527    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:32:37.242274    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:32:37.243243    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:32:37.244158    8672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:32:37.244957    8672 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:32:37.248222    8672 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:32:37.420853    8672 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:32:37.639778    8672 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:32:37.640169    8672 kubeadm.go:310] 
	I0731 12:32:37.640201    8672 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:32:37.640206    8672 kubeadm.go:310] 
	I0731 12:32:37.640238    8672 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:32:37.640243    8672 kubeadm.go:310] 
	I0731 12:32:37.640259    8672 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:32:37.640290    8672 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:32:37.640314    8672 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:32:37.640320    8672 kubeadm.go:310] 
	I0731 12:32:37.640345    8672 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:32:37.640347    8672 kubeadm.go:310] 
	I0731 12:32:37.640372    8672 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:32:37.640374    8672 kubeadm.go:310] 
	I0731 12:32:37.640399    8672 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:32:37.640436    8672 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:32:37.640497    8672 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:32:37.640500    8672 kubeadm.go:310] 
	I0731 12:32:37.640551    8672 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:32:37.640594    8672 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:32:37.640598    8672 kubeadm.go:310] 
	I0731 12:32:37.640649    8672 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dq04j.kb1wbzf2t3iztkgl \
	I0731 12:32:37.640704    8672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f \
	I0731 12:32:37.640714    8672 kubeadm.go:310] 	--control-plane 
	I0731 12:32:37.640718    8672 kubeadm.go:310] 
	I0731 12:32:37.640757    8672 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:32:37.640759    8672 kubeadm.go:310] 
	I0731 12:32:37.640801    8672 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dq04j.kb1wbzf2t3iztkgl \
	I0731 12:32:37.640850    8672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a2b9cdf2180d616a8a5a40b6a5d6978e3d5c2639a3267e8f365f02907ceda52f 
	I0731 12:32:37.640995    8672 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:32:37.641115    8672 cni.go:84] Creating CNI manager for ""
	I0731 12:32:37.641127    8672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:32:37.644282    8672 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:32:37.651257    8672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:32:37.654215    8672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
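The `scp memory` steps in this log (the CNI conflist here, the addon YAMLs below) stream generated assets from the test driver's memory straight to the guest over SSH rather than staging temp files. A minimal sketch of that pattern in Go, assuming the SSH port (51213), user (docker), and key path shown in the sshutil lines below; `pushAsset` is a hypothetical helper, not minikube's ssh_runner implementation, and the conflist payload is only representative of a bridge config, not the exact 496-byte file:

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushAsset copies an in-memory byte slice to a remote path over an
    // existing SSH connection, mirroring the "scp memory --> ..." log lines.
    func pushAsset(client *ssh.Client, data []byte, dst string) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(data)
    	// Write via sudo tee so the file lands in a root-owned directory.
    	return session.Run(fmt.Sprintf("sudo mkdir -p /etc/cni/net.d && sudo tee %s >/dev/null", dst))
    }

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "localhost:51213", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Representative bridge conflist; the real 1-k8s.conflist is not shown in the log.
    	conflist := []byte(`{"cniVersion":"0.3.1","name":"bridge","plugins":[{"type":"bridge","bridge":"bridge","isDefaultGateway":true,"ipMasq":true,"ipam":{"type":"host-local","subnet":"10.244.0.0/16"}}]}`)
    	if err := pushAsset(client, conflist, "/etc/cni/net.d/1-k8s.conflist"); err != nil {
    		log.Fatal(err)
    	}
    }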
	I0731 12:32:37.658880    8672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:32:37.658928    8672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:32:37.658959    8672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-443000 minikube.k8s.io/updated_at=2024_07_31T12_32_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=stopped-upgrade-443000 minikube.k8s.io/primary=true
	I0731 12:32:37.709902    8672 kubeadm.go:1113] duration metric: took 51.00825ms to wait for elevateKubeSystemPrivileges
	I0731 12:32:37.709949    8672 ops.go:34] apiserver oom_adj: -16
	I0731 12:32:37.710017    8672 kubeadm.go:394] duration metric: took 4m11.801928583s to StartCluster
	I0731 12:32:37.710028    8672 settings.go:142] acquiring lock: {Name:mk262cff1bf9355aa6c0530bb5de14a2847090f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:37.710184    8672 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:32:37.710552    8672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/kubeconfig: {Name:mk9fc3592e4cfdec6d1a46c77dad7fbde34fda57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:32:37.710784    8672 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:32:37.710856    8672 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:32:37.710832    8672 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:32:37.710919    8672 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-443000"
	I0731 12:32:37.710923    8672 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-443000"
	I0731 12:32:37.710931    8672 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-443000"
	I0731 12:32:37.710933    8672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-443000"
	W0731 12:32:37.710935    8672 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:32:37.710945    8672 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0731 12:32:37.712104    8672 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
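The `kapi.go:59` dump above is a client-go `rest.Config`: only the host and the client cert/key/CA file paths are populated, and every other field is zero-valued. A sketch of building an equivalent clientset from just those fields (paths copied from the dump); the `ServerVersion` probe simply mirrors the healthz failures that follow:

    package main

    import (
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// The non-zero fields from the rest.Config dump above.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/stopped-upgrade-443000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19360-6578/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// In this run the apiserver never answers, so this errors out,
    	// matching the repeated healthz timeouts below.
    	if _, err := clientset.Discovery().ServerVersion(); err != nil {
    		log.Printf("apiserver unreachable: %v", err)
    	}
    }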
	I0731 12:32:37.712237    8672 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-443000"
	W0731 12:32:37.712243    8672 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:32:37.712251    8672 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0731 12:32:37.715167    8672 out.go:177] * Verifying Kubernetes components...
	I0731 12:32:37.715606    8672 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:37.719367    8672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:32:37.719374    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:32:37.725199    8672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:32:37.729183    8672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:32:37.735173    8672 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:37.735181    8672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:32:37.735187    8672 sshutil.go:53] new ssh client: &{IP:localhost Port:51213 SSHKeyPath:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0731 12:32:37.804285    8672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:32:37.809189    8672 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:32:37.809235    8672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:32:37.813116    8672 api_server.go:72] duration metric: took 102.321875ms to wait for apiserver process to appear ...
	I0731 12:32:37.813123    8672 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:32:37.813130    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:37.822008    8672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:32:37.859175    8672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:32:42.815206    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:42.815232    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:47.815352    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:47.815386    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:52.815572    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:52.815593    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:57.815898    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:57.815949    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:02.816504    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:02.816524    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:07.817113    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:07.817156    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:33:08.201794    8672 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:33:08.206079    8672 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:33:08.220016    8672 addons.go:510] duration metric: took 30.509701666s for enable addons: enabled=[storage-provisioner]
	I0731 12:33:12.818055    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:12.818104    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:17.819174    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:17.819216    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:22.820658    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.820701    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:27.822450    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:27.822470    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:32.824580    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:32.824622    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:37.826733    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
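Each `Checking apiserver healthz` / `stopped:` pair above is one GET against `https://10.0.2.15:8443/healthz` with a roughly 5-second client timeout, retried until the 6m0s node wait from `start.go:235` runs out. A minimal sketch of that loop, assuming a skip-verify TLS client where the real checker trusts the cluster CA, and a 2-second retry interval that is only a guess at minikube's internal backoff:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Each probe gets its own 5s client timeout -- the source of the
    	// repeated "Client.Timeout exceeded" lines in the log.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The real checker uses the cluster CA; skip-verify keeps the sketch short.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(6 * time.Minute) // start.go's node wait from the log
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second) // assumed retry interval, not minikube's exact backoff
    	}
    	fmt.Println("timed out waiting for /healthz")
    }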
	I0731 12:33:37.826838    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.840958    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:37.841028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.851840    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:37.851914    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.862761    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:37.862834    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.873109    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:37.873178    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.883168    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:37.883237    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.893949    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:37.894017    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.904648    8672 logs.go:276] 0 containers: []
	W0731 12:33:37.904661    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.904723    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.915186    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:37.915203    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.915211    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.919457    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.919467    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.956863    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:37.956874    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:37.971610    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:37.971625    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:37.985392    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:37.985402    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:38.000552    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:38.000563    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:38.020815    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:38.020829    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:38.032890    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:38.032901    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:38.069639    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:38.069650    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:38.081065    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:38.081077    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:38.106638    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:38.106652    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:38.121712    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:38.121723    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:38.133320    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:38.133334    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
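Once healthz keeps timing out, the tooling falls back to the diagnostics pass above: for each control-plane component it lists containers matching the `k8s_<name>` Docker filter, then tails the last 400 lines of each match. A sketch of that loop using the same docker invocations as the log, with a local `exec.Command` standing in for minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same component list as the log-gathering passes in this log.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// docker logs --tail 400 <id>, as in the "Gathering logs for ..." lines.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s]\n%s", c, id, logs)
    		}
    	}
    }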
	I0731 12:33:40.647075    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:45.649246    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:45.649367    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:45.662630    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:45.662710    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:45.673728    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:45.673799    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:45.684741    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:45.684811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:45.694976    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:45.695045    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:45.705361    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:45.705427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:45.715584    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:45.715653    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:45.725870    8672 logs.go:276] 0 containers: []
	W0731 12:33:45.725882    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:45.725942    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:45.736600    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:45.736613    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:45.736619    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:45.754143    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:45.754154    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:45.766396    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:45.766412    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:45.780913    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:45.780923    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:45.792993    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:45.793004    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:45.810914    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:45.810923    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:45.823140    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:45.823150    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:33:45.835200    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:45.835209    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:45.859808    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:45.859818    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:45.893273    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:45.893283    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:45.898185    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:45.898196    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:45.933575    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:45.933586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:45.947959    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:45.947974    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:48.463628    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:53.465802    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:53.466048    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:53.490028    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:33:53.490159    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:53.510850    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:33:53.510929    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:53.523726    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:33:53.523802    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:53.534647    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:33:53.534711    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:53.544963    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:33:53.545028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:53.555318    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:33:53.555376    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:53.564919    8672 logs.go:276] 0 containers: []
	W0731 12:33:53.564931    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:53.564986    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:53.575278    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:33:53.575297    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.575303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.579673    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:33:53.579682    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:33:53.599219    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:33:53.599230    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:33:53.610566    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.610577    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:53.621520    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.621530    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:53.654965    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:53.654973    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:53.689513    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:33:53.689524    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:33:53.703234    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:33:53.703244    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:33:53.719514    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:33:53.719524    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:33:53.734297    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:33:53.734307    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:33:53.745944    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:33:53.745954    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:33:53.766444    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:33:53.766454    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:33:53.778360    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.778372    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:56.305318    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:01.307492    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:01.307678    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:01.322315    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:01.322400    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:01.334859    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:01.334929    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:01.345561    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:01.345634    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:01.356509    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:01.356579    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:01.367000    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:01.367074    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:01.376963    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:01.377033    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:01.386753    8672 logs.go:276] 0 containers: []
	W0731 12:34:01.386765    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:01.386827    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:01.398051    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:01.398068    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:01.398074    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:01.402426    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:01.402433    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:01.440725    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:01.440738    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:01.454913    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:01.454925    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:01.466458    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:01.466467    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:01.481670    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:01.481684    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:01.493862    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:01.493873    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:01.519326    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:01.519342    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:01.554724    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:01.554735    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:01.567369    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:01.567386    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:01.579028    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:01.579040    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:01.600374    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:01.600384    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:01.612391    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:01.612403    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:04.127507    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:09.128743    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:09.128933    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:09.145773    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:09.145864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:09.160867    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:09.160936    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:09.173261    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:09.173332    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:09.183199    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:09.183262    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:09.193753    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:09.193822    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:09.212810    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:09.212880    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:09.223057    8672 logs.go:276] 0 containers: []
	W0731 12:34:09.223067    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:09.223125    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:09.233317    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:09.233330    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:09.233334    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:09.238102    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:09.238110    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:09.274125    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:09.274140    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:09.287846    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:09.287856    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:09.302432    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:09.302442    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:09.314188    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:09.314198    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:09.332605    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:09.332615    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:09.343900    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:09.343913    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:09.377756    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:09.377767    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:09.395922    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:09.395932    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:09.407591    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:09.407602    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:09.420005    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:09.420014    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:09.443217    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:09.443224    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:11.956406    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:16.958647    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:16.958863    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:16.978827    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:16.978909    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:16.995905    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:16.995985    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:17.007245    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:17.007313    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:17.020895    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:17.020967    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:17.031624    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:17.031699    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:17.043074    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:17.043152    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:17.053048    8672 logs.go:276] 0 containers: []
	W0731 12:34:17.053063    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:17.053119    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:17.063708    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:17.063723    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:17.063728    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:17.075548    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:17.075562    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:17.087014    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:17.087024    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:17.122649    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:17.122658    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:17.157559    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:17.157571    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:17.171867    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:17.171877    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:17.183339    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:17.183351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:17.201487    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:17.201501    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:17.226277    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:17.226284    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:17.237482    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:17.237491    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:17.242188    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:17.242194    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:17.260856    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:17.260867    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:17.275549    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:17.275559    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:19.789324    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:24.791876    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:24.792131    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:24.819835    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:24.819971    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:24.838015    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:24.838095    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:24.851756    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:24.851832    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:24.863336    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:24.863401    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:24.873903    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:24.873969    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:24.891298    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:24.891370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:24.903870    8672 logs.go:276] 0 containers: []
	W0731 12:34:24.903883    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:24.903958    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:24.916372    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:24.916390    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:24.916395    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:24.930801    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:24.930811    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:24.944221    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:24.944231    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:24.956313    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:24.956323    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:24.970879    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:24.970889    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:24.992378    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:24.992395    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:25.018312    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:25.018326    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:25.053666    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:25.053679    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:25.067838    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:25.067849    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:25.080445    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:25.080455    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:25.092106    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:25.092116    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:25.106266    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:25.106278    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:25.110579    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:25.110585    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:27.650629    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:32.652908    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:32.653233    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:32.779002    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:32.779097    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:32.793637    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:32.793713    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:32.805757    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:32.805826    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:32.843090    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:32.843161    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:32.855770    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:32.855832    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:32.867291    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:32.867364    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:32.878958    8672 logs.go:276] 0 containers: []
	W0731 12:34:32.878971    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:32.879028    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:32.892291    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:32.892307    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:32.892312    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:32.928594    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:32.928603    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:32.967280    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:32.967290    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:32.985207    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:32.985218    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:32.997801    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:32.997813    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:33.010007    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:33.010020    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:33.035472    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:33.035482    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:33.047143    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:33.047154    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:33.051084    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:33.051090    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:33.065576    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:33.065586    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:33.077227    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:33.077236    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:33.092415    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:33.092426    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:33.104847    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:33.104857    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:36.169042    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:41.171288    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:41.171472    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:41.193808    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:41.193913    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:41.209955    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:41.210041    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:41.222421    8672 logs.go:276] 2 containers: [98a9f1546cfd 0fd228a32104]
	I0731 12:34:41.222482    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:41.233378    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:41.233448    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:41.244789    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:41.244864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:41.255770    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:41.255836    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:41.266691    8672 logs.go:276] 0 containers: []
	W0731 12:34:41.266702    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:41.266761    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:41.277415    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:41.277428    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:41.277434    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:41.311165    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:41.311174    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:41.325679    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:41.325690    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:41.338233    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:41.338243    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:41.350070    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:41.350081    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:41.365202    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:41.365213    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:41.377085    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:41.377095    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:41.400756    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:41.400765    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:41.404874    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:41.404881    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:41.440985    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:41.440995    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:41.460219    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:41.460229    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:41.472121    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:41.472133    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:41.493373    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:41.493384    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:44.009274    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:49.010719    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:49.011134    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:49.048836    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:49.048967    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:49.072660    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:49.072757    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:49.086692    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:34:49.086772    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:49.100841    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:49.100906    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:49.115364    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:49.115440    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:49.126161    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:49.126232    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:49.136880    8672 logs.go:276] 0 containers: []
	W0731 12:34:49.136893    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:49.136952    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:49.148071    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:49.148088    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:49.148093    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:49.163099    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:49.163110    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:49.174960    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:49.174971    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:49.200089    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:49.200100    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:49.204903    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:34:49.204910    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:34:49.216523    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:49.216535    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:49.228857    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:49.228867    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:49.247593    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:49.247604    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:49.283560    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:49.283572    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:49.296163    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:49.296174    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:49.312109    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:49.312121    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:49.346855    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:49.346863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:49.361419    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:34:49.361430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:34:49.372774    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:49.372790    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:49.384864    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:49.384876    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
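	(The block above is one complete diagnostic cycle, and it repeats for the rest of this log: the healthz probe at https://10.0.2.15:8443/healthz times out after five seconds, and minikube falls back to enumerating the control-plane containers and tailing their logs. A minimal shell sketch of that sequence, built only from the commands shown in the log — the container IDs printed above are specific to this run and would differ elsewhere:

	    # Discover a component container the way minikube does: docker ps
	    # filtered by the k8s_<component> name prefix, printing only IDs.
	    APISERVER=$(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}')

	    # Tail the last 400 lines of its log, as logs.go does per component.
	    docker logs --tail 400 "$APISERVER"

	    # The host-level sources gathered alongside the container logs:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Container status, preferring crictl and falling back to docker:
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)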
	I0731 12:34:51.899726    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:56.900987    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:56.901232    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:56.930507    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:34:56.930620    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:56.953352    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:34:56.953428    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:56.966556    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:34:56.966626    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:56.977753    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:34:56.977818    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:56.988970    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:34:56.989034    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:57.000132    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:34:57.000193    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:57.014585    8672 logs.go:276] 0 containers: []
	W0731 12:34:57.014596    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:57.014657    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:57.026084    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:34:57.026103    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:57.026116    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:57.030307    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:34:57.030317    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:34:57.042575    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:34:57.042588    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:34:57.060997    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:34:57.061011    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:34:57.076612    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:57.076624    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:57.101909    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:57.101916    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:57.136107    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:57.136113    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:57.173555    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:34:57.173565    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:34:57.188547    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:34:57.188559    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:34:57.203772    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:34:57.203782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:34:57.220121    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:34:57.220133    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:34:57.232155    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:34:57.232164    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:34:57.244238    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:34:57.244250    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:34:57.256158    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:34:57.256172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:57.268501    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:34:57.268512    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:34:59.783290    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:04.785435    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:04.785688    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:04.806163    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:04.806266    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:04.821513    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:04.821592    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:04.835542    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:04.835625    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:04.846284    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:04.846370    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:04.856795    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:04.856864    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:04.866761    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:04.866824    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:04.876417    8672 logs.go:276] 0 containers: []
	W0731 12:35:04.876429    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:04.876491    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:04.886915    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:04.886931    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:04.886936    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:04.898902    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:04.898915    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:04.910895    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:04.910910    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:04.948769    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:04.948782    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:04.962470    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:04.962484    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:04.974300    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:04.974313    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:04.989057    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:04.989068    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:05.025412    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:05.025422    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:05.039733    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:05.039741    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:05.052388    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:05.052399    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:05.063857    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:05.063868    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:05.080492    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:05.080501    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:05.105728    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:05.105735    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:05.117468    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:05.117483    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:05.121795    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:05.121802    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:07.643088    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:12.643652    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:12.644075    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:12.681260    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:12.681400    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:12.703931    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:12.704025    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:12.719089    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:12.719170    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:12.732968    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:12.733032    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:12.743943    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:12.744021    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:12.755549    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:12.755623    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:12.766453    8672 logs.go:276] 0 containers: []
	W0731 12:35:12.766463    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:12.766525    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:12.777717    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:12.777734    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:12.777740    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:12.813599    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:12.813611    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:12.826758    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:12.826772    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:12.838420    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:12.838430    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:12.865856    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:12.865867    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:12.878285    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:12.878303    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:12.913661    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:12.913673    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:12.925350    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:12.925363    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:12.939507    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:12.939518    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:12.951685    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:12.951696    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:12.963267    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:12.963282    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:12.988212    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:12.988224    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:12.993196    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:12.993203    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:13.007340    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:13.007351    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:13.020919    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:13.020934    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:15.538841    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:20.541060    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:20.541312    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:20.555202    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:20.555293    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:20.566434    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:20.566501    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:20.577706    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:20.577776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:20.588477    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:20.588545    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:20.598974    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:20.599044    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:20.610152    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:20.610227    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:20.620012    8672 logs.go:276] 0 containers: []
	W0731 12:35:20.620023    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:20.620082    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:20.645892    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:20.645913    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:20.645919    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:20.659525    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:20.659537    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:20.671574    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:20.671587    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:20.683775    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:20.683785    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:20.695576    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:20.695590    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:20.700567    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:20.700576    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:20.718253    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:20.718264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:20.735044    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:20.735058    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:20.753315    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:20.753326    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:20.788878    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:20.788888    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:20.830242    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:20.830256    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:20.842188    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:20.842199    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:20.855379    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:20.855393    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:20.872279    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:20.872290    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:20.888729    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:20.888740    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:23.413953    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:28.416475    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:28.416796    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:28.444407    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:28.444529    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:28.462136    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:28.462226    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:28.479699    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:28.479776    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:28.490902    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:28.490963    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:28.503192    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:28.503254    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:28.513739    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:28.513811    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:28.523616    8672 logs.go:276] 0 containers: []
	W0731 12:35:28.523625    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:28.523675    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:28.534142    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:28.534159    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:28.534163    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:28.571226    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:28.571241    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:28.583492    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:28.583503    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:28.595115    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:28.595124    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:28.631670    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:28.631682    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:28.643352    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:28.643364    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:28.655252    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:28.655264    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:28.671441    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:28.671453    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:28.695676    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:28.695686    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:28.709899    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:28.709910    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:28.721297    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:28.721308    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:28.733153    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:28.733164    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:28.748369    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:28.748382    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:28.752645    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:28.752651    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:28.786463    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:28.786476    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:31.306436    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:36.308439    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:36.308575    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:36.322741    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:36.322818    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:36.334342    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:36.334405    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:36.345340    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:36.345412    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:36.355964    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:36.356025    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:36.366348    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:36.366413    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:36.376973    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:36.377041    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:36.386889    8672 logs.go:276] 0 containers: []
	W0731 12:35:36.386900    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:36.386961    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:36.397367    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:36.397383    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:36.397388    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:36.433630    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:36.433644    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:36.448257    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:36.448267    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:36.459893    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:36.459908    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:36.471227    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:36.471240    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:36.482958    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:36.482970    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:36.509213    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:36.509222    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:36.521558    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:36.521572    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:36.532961    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:36.532971    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:36.544943    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:36.544953    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:36.560349    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:36.560359    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:36.594705    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:36.594714    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:36.612775    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:36.612788    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:36.617149    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:36.617159    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:36.631062    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:36.631072    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:39.145375    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:44.147635    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:44.147805    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:44.163345    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:44.163427    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:44.175374    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:44.175444    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:44.186493    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:44.186570    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:44.196698    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:44.196767    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:44.207002    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:44.207077    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:44.217275    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:44.217340    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:44.227005    8672 logs.go:276] 0 containers: []
	W0731 12:35:44.227021    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:44.227087    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:44.237427    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:44.237443    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:44.237449    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:44.254263    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:44.254274    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:44.287477    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:44.287485    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:44.292105    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:44.292114    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:44.306528    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:44.306538    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:44.318061    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:44.318071    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:44.352770    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:44.352780    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:44.364732    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:44.364743    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:44.376534    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:44.376545    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:44.388506    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:44.388519    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:44.400160    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:44.400172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:44.411740    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:44.411752    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:44.426029    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:44.426040    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:44.437836    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:44.437849    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:44.452481    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:44.452491    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:46.977908    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:51.980078    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:51.980185    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:51.991900    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:51.991971    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:52.002758    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:52.002828    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:52.015892    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:52.015958    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:52.026375    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:52.026446    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:52.036325    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:52.036389    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:52.047405    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:52.047471    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:52.057331    8672 logs.go:276] 0 containers: []
	W0731 12:35:52.057340    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:52.057397    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:52.067871    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:52.067888    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:52.067893    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:52.101556    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:35:52.101564    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:35:52.117269    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:52.117281    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:52.133402    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:52.133411    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:52.158516    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:35:52.158525    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:35:52.173675    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:35:52.173685    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:35:52.187581    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:35:52.187592    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:35:52.201244    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:35:52.201255    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:35:52.220313    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:35:52.220330    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:35:52.231670    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:35:52.231681    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:52.243687    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:52.243700    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:52.247950    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:52.247957    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:52.281469    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:52.281484    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:35:52.300178    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:35:52.300191    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:35:52.312624    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:35:52.312638    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:35:54.825809    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:59.828383    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:59.828581    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:59.843328    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:35:59.843410    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:59.855275    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:35:59.855352    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:59.867013    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:35:59.867092    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:59.878707    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:35:59.878778    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:59.889621    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:35:59.889695    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:59.901380    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:35:59.901451    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:59.912091    8672 logs.go:276] 0 containers: []
	W0731 12:35:59.912104    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:59.912167    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:59.927656    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:35:59.927672    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:35:59.927678    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:35:59.946702    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:59.946712    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:59.951845    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:59.951851    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:59.990559    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:35:59.990570    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:00.002457    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:00.002469    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:00.015081    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:00.015093    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:00.027855    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:00.027869    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:00.047323    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:00.047338    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:00.063259    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:00.063274    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:00.078285    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:00.078298    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:00.096377    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:00.096386    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:00.107798    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:00.107809    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:00.119346    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:00.119356    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:00.155057    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:00.155065    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:00.172143    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:00.172153    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:02.698123    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:07.700432    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:07.700593    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:07.714260    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:07.714341    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:07.725010    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:07.725077    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:07.735506    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:07.735585    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:07.750397    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:07.750471    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:07.760737    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:07.760805    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:07.771222    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:07.771285    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:07.781349    8672 logs.go:276] 0 containers: []
	W0731 12:36:07.781359    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:07.781421    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:07.791428    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:07.791443    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:07.791448    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:07.803472    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:07.803483    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:07.808037    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:07.808044    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:07.820902    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:07.820912    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:07.832359    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:07.832370    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:07.852944    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:07.852955    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:07.867097    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:07.867109    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:07.883802    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:07.883812    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:07.899662    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:07.899672    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:07.911924    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:07.911936    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:07.923574    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:07.923586    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:07.947202    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:07.947211    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:07.980870    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:07.980880    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:08.029424    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:08.029435    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:08.041311    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:08.041322    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:10.561079    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:15.563387    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:15.563544    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:15.580324    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:15.580407    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:15.592249    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:15.592322    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:15.603050    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:15.603119    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:15.613814    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:15.613884    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:15.624983    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:15.625055    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:15.640565    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:15.640635    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:15.653774    8672 logs.go:276] 0 containers: []
	W0731 12:36:15.653784    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:15.653839    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:15.663697    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:15.663714    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:15.663719    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:15.675251    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:15.675262    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:15.687253    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:15.687264    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:15.699030    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:15.699041    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:15.711004    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:15.711019    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:15.728506    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:15.728515    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:15.752194    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:15.752202    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:15.764043    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:15.764053    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:15.775215    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:15.775224    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:15.786625    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:15.786636    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:15.791666    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:15.791673    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:15.825830    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:15.825843    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:15.841057    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:15.841067    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:15.856271    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:15.856280    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:15.870951    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:15.870962    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:18.408592    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:23.410921    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:23.411168    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:23.436184    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:23.436303    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:23.452827    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:23.452906    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:23.472025    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:23.472094    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:23.482159    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:23.482230    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:23.492878    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:23.492946    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:23.509489    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:23.509559    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:23.520062    8672 logs.go:276] 0 containers: []
	W0731 12:36:23.520073    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:23.520131    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:23.530888    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:23.530903    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:23.530909    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:23.566434    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:23.566448    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:23.578652    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:23.578664    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:23.590428    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:23.590442    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:23.626548    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:23.626555    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:23.631267    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:23.631272    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:23.648604    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:23.648614    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:23.660527    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:23.660543    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:23.682578    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:23.682589    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:23.696835    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:23.696845    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:23.709052    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:23.709064    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:23.720854    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:23.720863    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:23.732585    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:23.732595    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:23.747305    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:23.747315    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:23.759396    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:23.759406    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:26.286524    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:31.287294    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:31.287462    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:36:31.305055    8672 logs.go:276] 1 containers: [8a82cab0c91a]
	I0731 12:36:31.305140    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:36:31.318009    8672 logs.go:276] 1 containers: [f4020ba406b1]
	I0731 12:36:31.318082    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:36:31.329392    8672 logs.go:276] 4 containers: [78a04aba8c8e 6000197f85bd 98a9f1546cfd 0fd228a32104]
	I0731 12:36:31.329469    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:36:31.340913    8672 logs.go:276] 1 containers: [ad73fdf5e6b1]
	I0731 12:36:31.340981    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:36:31.351106    8672 logs.go:276] 1 containers: [d01b808eed3e]
	I0731 12:36:31.351171    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:36:31.361218    8672 logs.go:276] 1 containers: [5c31bf72c473]
	I0731 12:36:31.361276    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:36:31.370710    8672 logs.go:276] 0 containers: []
	W0731 12:36:31.370720    8672 logs.go:278] No container was found matching "kindnet"
	I0731 12:36:31.370771    8672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:36:31.380850    8672 logs.go:276] 1 containers: [0af8094957c2]
	I0731 12:36:31.380870    8672 logs.go:123] Gathering logs for kube-apiserver [8a82cab0c91a] ...
	I0731 12:36:31.380875    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a82cab0c91a"
	I0731 12:36:31.397176    8672 logs.go:123] Gathering logs for coredns [6000197f85bd] ...
	I0731 12:36:31.397186    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6000197f85bd"
	I0731 12:36:31.408442    8672 logs.go:123] Gathering logs for kube-scheduler [ad73fdf5e6b1] ...
	I0731 12:36:31.408452    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad73fdf5e6b1"
	I0731 12:36:31.423651    8672 logs.go:123] Gathering logs for storage-provisioner [0af8094957c2] ...
	I0731 12:36:31.423662    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0af8094957c2"
	I0731 12:36:31.435077    8672 logs.go:123] Gathering logs for Docker ...
	I0731 12:36:31.435087    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:36:31.460162    8672 logs.go:123] Gathering logs for kubelet ...
	I0731 12:36:31.460172    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:36:31.494773    8672 logs.go:123] Gathering logs for coredns [0fd228a32104] ...
	I0731 12:36:31.494781    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fd228a32104"
	I0731 12:36:31.513090    8672 logs.go:123] Gathering logs for kube-controller-manager [5c31bf72c473] ...
	I0731 12:36:31.513101    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c31bf72c473"
	I0731 12:36:31.534394    8672 logs.go:123] Gathering logs for dmesg ...
	I0731 12:36:31.534405    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:36:31.539124    8672 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:36:31.539131    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:36:31.573811    8672 logs.go:123] Gathering logs for coredns [98a9f1546cfd] ...
	I0731 12:36:31.573822    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98a9f1546cfd"
	I0731 12:36:31.585610    8672 logs.go:123] Gathering logs for etcd [f4020ba406b1] ...
	I0731 12:36:31.585619    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4020ba406b1"
	I0731 12:36:31.600305    8672 logs.go:123] Gathering logs for coredns [78a04aba8c8e] ...
	I0731 12:36:31.600316    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a04aba8c8e"
	I0731 12:36:31.612277    8672 logs.go:123] Gathering logs for kube-proxy [d01b808eed3e] ...
	I0731 12:36:31.612291    8672 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d01b808eed3e"
	I0731 12:36:31.627052    8672 logs.go:123] Gathering logs for container status ...
	I0731 12:36:31.627063    8672 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:36:34.140799    8672 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:36:39.142601    8672 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:36:39.146874    8672 out.go:177] 
	W0731 12:36:39.149950    8672 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:36:39.149956    8672 out.go:239] * 
	W0731 12:36:39.150409    8672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:36:39.165833    8672 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.55s)
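The failure above is a probe loop: every healthz request to https://10.0.2.15:8443/healthz times out (api_server.go:269), minikube re-gathers the same container logs, and after the 6m0s node wait it exits with GUEST_START. For anyone reproducing this, a minimal manual version of the probe might look like the following sketch; it assumes the guest is reachable over ssh and that curl is available in it, and it reuses the IP and port from the log (-k skips verification of the apiserver's self-signed certificate):

	out/minikube-darwin-arm64 ssh -p stopped-upgrade-443000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; in this run the request would hang and
	# time out, matching the repeated api_server.go:269 "stopped:" lines above.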

                                                
                                    
TestPause/serial/Start (10.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-006000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-006000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.070842041s)

                                                
                                                
-- stdout --
	* [pause-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-006000" primary control-plane node in "pause-006000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-006000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-006000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-006000 -n pause-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-006000 -n pause-006000: exit status 7 (50.805583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.12s)
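This and every remaining qemu2 failure in this report share one root cause: socket_vmnet_client is refused on /var/run/socket_vmnet, so qemu-system-aarch64 never receives a network file descriptor and the VM cannot be created. A first check on the build host, sketched under the assumption that socket_vmnet was installed via Homebrew (the socket path comes from the log; the service commands come from socket_vmnet's usual setup, not from this report):

	ls -l /var/run/socket_vmnet               # does the unix socket exist at all?
	sudo brew services restart socket_vmnet   # assumed Homebrew service name; the daemon must run as root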

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 : exit status 80 (9.738562833s)

                                                
                                                
-- stdout --
	* [NoKubernetes-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-492000" primary control-plane node in "NoKubernetes-492000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000: exit status 7 (68.40875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.81s)
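The stderr above prescribes its own recovery, and the later NoKubernetes subtests show it cannot succeed while the socket is down: deleting and recreating the profile re-runs the same socket_vmnet_client connect and fails identically. For reference, the suggested sequence, using the binary and profile names from the log:

	out/minikube-darwin-arm64 delete -p NoKubernetes-492000
	out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2
	# Only useful once /var/run/socket_vmnet accepts connections again.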

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 : exit status 80 (7.472246625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-492000
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000: exit status 7 (33.295458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.51s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.66s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1274334768/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.66s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.5s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3405196724/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.50s)
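Both TestHyperkitDriverSkipUpgrade failures are environment mismatches rather than upgrade regressions: hyperkit is an x86_64-only hypervisor, so minikube rejects it on this Apple-silicon agent with DRV_UNSUPPORTED_OS (exit status 56). On darwin/arm64 the equivalent start would use the qemu2 driver, as the rest of this suite does; a sketch with a hypothetical profile name:

	out/minikube-darwin-arm64 start -p hyperkit-upgrade-test --driver=qemu2   # profile name is illustrative only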

                                                
                                    
TestNoKubernetes/serial/Start (5.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244104083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-492000
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000: exit status 7 (30.82925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 : exit status 80 (5.276673166s)

                                                
                                                
-- stdout --
	* [NoKubernetes-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-492000
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-492000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-492000 -n NoKubernetes-492000: exit status 7 (69.306541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
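Each post-mortem queries minikube's templated status so the helper can branch on the bare host state. minikube documents the status exit code as a bitmask (host, cluster, and Kubernetes each contributing one bit), so the repeated exit status 7 indicates all three are down, which is why helpers_test.go:239 treats it as "may be ok" for a machine that never started. The same query with a few more fields, sketched here; .Kubelet and .APIServer are standard status template fields taken from minikube's documentation rather than from this log:

	out/minikube-darwin-arm64 status -p NoKubernetes-492000 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'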

                                                
                                    
TestNetworkPlugins/group/auto/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.019404291s)

                                                
                                                
-- stdout --
	* [auto-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-782000" primary control-plane node in "auto-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:38:19.027763    9369 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:19.027890    9369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:19.027893    9369 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:19.027896    9369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:19.028025    9369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:38:19.029038    9369 out.go:298] Setting JSON to false
	I0731 12:38:19.045055    9369 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5868,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:38:19.045128    9369 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:19.051935    9369 out.go:177] * [auto-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:19.058880    9369 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:38:19.058960    9369 notify.go:220] Checking for updates...
	I0731 12:38:19.065878    9369 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:38:19.068856    9369 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:19.072843    9369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:19.075898    9369 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:38:19.078824    9369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:19.082256    9369 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:19.082322    9369 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:19.082368    9369 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:19.085862    9369 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:19.092829    9369 start.go:297] selected driver: qemu2
	I0731 12:38:19.092836    9369 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:19.092845    9369 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:19.095118    9369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:19.098874    9369 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:19.102826    9369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:19.102841    9369 cni.go:84] Creating CNI manager for ""
	I0731 12:38:19.102850    9369 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:38:19.102857    9369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:38:19.102886    9369 start.go:340] cluster config:
	{Name:auto-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:19.106686    9369 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:19.114814    9369 out.go:177] * Starting "auto-782000" primary control-plane node in "auto-782000" cluster
	I0731 12:38:19.118777    9369 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:19.118794    9369 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:19.118807    9369 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:19.118885    9369 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:19.118891    9369 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:19.118956    9369 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/auto-782000/config.json ...
	I0731 12:38:19.118968    9369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/auto-782000/config.json: {Name:mk5a3261de365d8d75ac6271c469229d12c3070a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:19.119192    9369 start.go:360] acquireMachinesLock for auto-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:19.119228    9369 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "auto-782000"
	I0731 12:38:19.119239    9369 start.go:93] Provisioning new machine with config: &{Name:auto-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:19.119267    9369 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:19.127810    9369 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:19.146381    9369 start.go:159] libmachine.API.Create for "auto-782000" (driver="qemu2")
	I0731 12:38:19.146412    9369 client.go:168] LocalClient.Create starting
	I0731 12:38:19.146475    9369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:19.146507    9369 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:19.146518    9369 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:19.146565    9369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:19.146589    9369 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:19.146600    9369 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:19.147051    9369 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:19.294976    9369 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:19.547115    9369 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:19.547125    9369 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:19.547357    9369 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:19.556880    9369 main.go:141] libmachine: STDOUT: 
	I0731 12:38:19.556898    9369 main.go:141] libmachine: STDERR: 
	I0731 12:38:19.556947    9369 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2 +20000M
	I0731 12:38:19.564827    9369 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:19.564840    9369 main.go:141] libmachine: STDERR: 
	I0731 12:38:19.564856    9369 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:19.564863    9369 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:19.564875    9369 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:19.564897    9369 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f3:af:06:8d:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:19.566485    9369 main.go:141] libmachine: STDOUT: 
	I0731 12:38:19.566498    9369 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:19.566517    9369 client.go:171] duration metric: took 420.285333ms to LocalClient.Create
	I0731 12:38:21.567903    9369 start.go:128] duration metric: took 2.449637125s to createHost
	I0731 12:38:21.568017    9369 start.go:83] releasing machines lock for "auto-782000", held for 2.449769s
	W0731 12:38:21.568112    9369 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:21.580405    9369 out.go:177] * Deleting "auto-782000" in qemu2 ...
	W0731 12:38:21.609818    9369 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:21.609843    9369 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:26.610436    9369 start.go:360] acquireMachinesLock for auto-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:26.610828    9369 start.go:364] duration metric: took 315.25µs to acquireMachinesLock for "auto-782000"
	I0731 12:38:26.610960    9369 start.go:93] Provisioning new machine with config: &{Name:auto-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:26.611265    9369 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:26.627736    9369 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:26.677442    9369 start.go:159] libmachine.API.Create for "auto-782000" (driver="qemu2")
	I0731 12:38:26.677487    9369 client.go:168] LocalClient.Create starting
	I0731 12:38:26.677595    9369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:26.677654    9369 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:26.677678    9369 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:26.677748    9369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:26.677798    9369 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:26.677812    9369 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:26.678309    9369 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:26.836014    9369 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:26.952109    9369 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:26.952116    9369 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:26.952321    9369 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:26.961621    9369 main.go:141] libmachine: STDOUT: 
	I0731 12:38:26.961638    9369 main.go:141] libmachine: STDERR: 
	I0731 12:38:26.961682    9369 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2 +20000M
	I0731 12:38:26.969893    9369 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:26.969912    9369 main.go:141] libmachine: STDERR: 
	I0731 12:38:26.969935    9369 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:26.969940    9369 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:26.969955    9369 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:26.969984    9369 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ee:43:1d:92:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/auto-782000/disk.qcow2
	I0731 12:38:26.971582    9369 main.go:141] libmachine: STDOUT: 
	I0731 12:38:26.971597    9369 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:26.971610    9369 client.go:171] duration metric: took 294.200583ms to LocalClient.Create
	I0731 12:38:28.973245    9369 start.go:128] duration metric: took 2.362580958s to createHost
	I0731 12:38:28.973303    9369 start.go:83] releasing machines lock for "auto-782000", held for 2.36308s
	W0731 12:38:28.973688    9369 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:28.983623    9369 out.go:177] 
	W0731 12:38:28.991403    9369 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:28.991443    9369 out.go:239] * 
	* 
	W0731 12:38:28.994243    9369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:29.004146    9369 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.02s)
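Every failure in this group has the same root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and createHost aborts. On a unix socket, "Connection refused" means the path may exist but no daemon is accepting connections on it. A minimal, hypothetical Go probe for that condition (the socket path is taken from the failing command line above; the program itself is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path from the failing socket_vmnet_client invocation above;
		// adjust if socket_vmnet was installed elsewhere.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the logs: the socket file
			// may exist, but no socket_vmnet daemon is listening on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}

Run against an agent in this state, a non-zero exit from this probe would predict the exit status 80 failures seen throughout the group.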

TestNetworkPlugins/group/calico/Start (9.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.86569375s)

-- stdout --
	* [calico-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-782000" primary control-plane node in "calico-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:31.242113    9478 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:31.242232    9478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:31.242235    9478 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:31.242238    9478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:31.242377    9478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:38:31.243406    9478 out.go:298] Setting JSON to false
	I0731 12:38:31.259715    9478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5880,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:38:31.259785    9478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:31.265683    9478 out.go:177] * [calico-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:31.272722    9478 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:38:31.272757    9478 notify.go:220] Checking for updates...
	I0731 12:38:31.279518    9478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:38:31.283713    9478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:31.287668    9478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:31.290613    9478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:38:31.293629    9478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:31.297015    9478 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:31.297082    9478 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:31.297128    9478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:31.300661    9478 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:31.307710    9478 start.go:297] selected driver: qemu2
	I0731 12:38:31.307715    9478 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:31.307720    9478 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:31.310101    9478 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:31.311638    9478 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:31.315745    9478 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:31.315787    9478 cni.go:84] Creating CNI manager for "calico"
	I0731 12:38:31.315792    9478 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 12:38:31.315828    9478 start.go:340] cluster config:
	{Name:calico-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:31.319734    9478 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:31.328651    9478 out.go:177] * Starting "calico-782000" primary control-plane node in "calico-782000" cluster
	I0731 12:38:31.332648    9478 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:31.332669    9478 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:31.332682    9478 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:31.332751    9478 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:31.332757    9478 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:31.332827    9478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/calico-782000/config.json ...
	I0731 12:38:31.332840    9478 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/calico-782000/config.json: {Name:mk777789377a8253f7a1cfd358fef95373a0b40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:31.333070    9478 start.go:360] acquireMachinesLock for calico-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:31.333105    9478 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "calico-782000"
	I0731 12:38:31.333116    9478 start.go:93] Provisioning new machine with config: &{Name:calico-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:31.333147    9478 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:31.341639    9478 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:31.359298    9478 start.go:159] libmachine.API.Create for "calico-782000" (driver="qemu2")
	I0731 12:38:31.359321    9478 client.go:168] LocalClient.Create starting
	I0731 12:38:31.359383    9478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:31.359417    9478 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:31.359429    9478 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:31.359465    9478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:31.359489    9478 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:31.359497    9478 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:31.359945    9478 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:31.507367    9478 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:31.640449    9478 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:31.640455    9478 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:31.640684    9478 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:31.650212    9478 main.go:141] libmachine: STDOUT: 
	I0731 12:38:31.650228    9478 main.go:141] libmachine: STDERR: 
	I0731 12:38:31.650276    9478 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2 +20000M
	I0731 12:38:31.658189    9478 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:31.658203    9478 main.go:141] libmachine: STDERR: 
	I0731 12:38:31.658213    9478 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:31.658218    9478 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:31.658233    9478 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:31.658267    9478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b8:74:6b:a6:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:31.659884    9478 main.go:141] libmachine: STDOUT: 
	I0731 12:38:31.659899    9478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:31.659917    9478 client.go:171] duration metric: took 300.654291ms to LocalClient.Create
	I0731 12:38:33.661762    9478 start.go:128] duration metric: took 2.329048458s to createHost
	I0731 12:38:33.661864    9478 start.go:83] releasing machines lock for "calico-782000", held for 2.329217458s
	W0731 12:38:33.661932    9478 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:33.669021    9478 out.go:177] * Deleting "calico-782000" in qemu2 ...
	W0731 12:38:33.695955    9478 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:33.695981    9478 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:38.697372    9478 start.go:360] acquireMachinesLock for calico-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:38.697781    9478 start.go:364] duration metric: took 329.833µs to acquireMachinesLock for "calico-782000"
	I0731 12:38:38.697898    9478 start.go:93] Provisioning new machine with config: &{Name:calico-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:38.698272    9478 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:38.714750    9478 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:38.767321    9478 start.go:159] libmachine.API.Create for "calico-782000" (driver="qemu2")
	I0731 12:38:38.767368    9478 client.go:168] LocalClient.Create starting
	I0731 12:38:38.767486    9478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:38.767556    9478 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:38.767601    9478 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:38.767673    9478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:38.767735    9478 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:38.767749    9478 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:38.768429    9478 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:38.930091    9478 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:39.014433    9478 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:39.014438    9478 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:39.014643    9478 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:39.023909    9478 main.go:141] libmachine: STDOUT: 
	I0731 12:38:39.023929    9478 main.go:141] libmachine: STDERR: 
	I0731 12:38:39.023987    9478 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2 +20000M
	I0731 12:38:39.031916    9478 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:39.031930    9478 main.go:141] libmachine: STDERR: 
	I0731 12:38:39.031945    9478 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:39.031948    9478 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:39.031957    9478 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:39.031988    9478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:9d:54:a4:1f:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/calico-782000/disk.qcow2
	I0731 12:38:39.033621    9478 main.go:141] libmachine: STDOUT: 
	I0731 12:38:39.033636    9478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:39.033648    9478 client.go:171] duration metric: took 266.311625ms to LocalClient.Create
	I0731 12:38:41.035544    9478 start.go:128] duration metric: took 2.337552084s to createHost
	I0731 12:38:41.035592    9478 start.go:83] releasing machines lock for "calico-782000", held for 2.33810175s
	W0731 12:38:41.035952    9478 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:41.050670    9478 out.go:177] 
	W0731 12:38:41.053728    9478 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:41.053759    9478 out.go:239] * 
	* 
	W0731 12:38:41.056499    9478 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:41.063619    9478 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.87s)
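The calico log also shows minikube's create-retry behavior: the first StartHost failure deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), and provisions once more before exiting with status 80. A simplified sketch of that pattern, with the machine locking and cleanup done by the real start.go omitted:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the provisioning step that fails above;
	// here it always returns the error observed in the logs.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
		return createHost(profile)  // second and final attempt
	}

	func main() {
		if err := startWithRetry("calico-782000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the daemon never comes up between attempts, the retry buys nothing here; both attempts fail identically, which is why each test in this group takes roughly ten seconds.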

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.913440542s)

-- stdout --
	* [custom-flannel-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-782000" primary control-plane node in "custom-flannel-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:43.465565    9599 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:43.465692    9599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:43.465695    9599 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:43.465697    9599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:43.465818    9599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:38:43.466944    9599 out.go:298] Setting JSON to false
	I0731 12:38:43.483032    9599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5892,"bootTime":1722448831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:38:43.483119    9599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:43.489342    9599 out.go:177] * [custom-flannel-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:43.496202    9599 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:38:43.496270    9599 notify.go:220] Checking for updates...
	I0731 12:38:43.503298    9599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:38:43.506240    9599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:43.510281    9599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:43.513335    9599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:38:43.516231    9599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:43.519625    9599 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:43.519694    9599 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:43.519737    9599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:43.524364    9599 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:43.531286    9599 start.go:297] selected driver: qemu2
	I0731 12:38:43.531299    9599 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:43.531307    9599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:43.533690    9599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:43.537266    9599 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:43.540337    9599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:43.540385    9599 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 12:38:43.540394    9599 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 12:38:43.540429    9599 start.go:340] cluster config:
	{Name:custom-flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:43.544263    9599 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:43.551293    9599 out.go:177] * Starting "custom-flannel-782000" primary control-plane node in "custom-flannel-782000" cluster
	I0731 12:38:43.555267    9599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:43.555286    9599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:43.555298    9599 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:43.555357    9599 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:43.555362    9599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:43.555432    9599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/custom-flannel-782000/config.json ...
	I0731 12:38:43.555443    9599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/custom-flannel-782000/config.json: {Name:mkb9f566ce0cf4b0b6e6ccd0b040c5d2322662ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:43.555650    9599 start.go:360] acquireMachinesLock for custom-flannel-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:43.555685    9599 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "custom-flannel-782000"
	I0731 12:38:43.555695    9599 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:43.555727    9599 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:43.561285    9599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:43.579082    9599 start.go:159] libmachine.API.Create for "custom-flannel-782000" (driver="qemu2")
	I0731 12:38:43.579116    9599 client.go:168] LocalClient.Create starting
	I0731 12:38:43.579176    9599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:43.579210    9599 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:43.579224    9599 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:43.579270    9599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:43.579292    9599 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:43.579300    9599 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:43.579671    9599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:43.730363    9599 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:43.886478    9599 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:43.886484    9599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:43.886706    9599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:43.896125    9599 main.go:141] libmachine: STDOUT: 
	I0731 12:38:43.896145    9599 main.go:141] libmachine: STDERR: 
	I0731 12:38:43.896190    9599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2 +20000M
	I0731 12:38:43.904101    9599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:43.904121    9599 main.go:141] libmachine: STDERR: 
	I0731 12:38:43.904133    9599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:43.904138    9599 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:43.904146    9599 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:43.904171    9599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:cd:15:ff:e3:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:43.905827    9599 main.go:141] libmachine: STDOUT: 
	I0731 12:38:43.905840    9599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:43.905856    9599 client.go:171] duration metric: took 326.771666ms to LocalClient.Create
	I0731 12:38:45.907819    9599 start.go:128] duration metric: took 2.352316208s to createHost
	I0731 12:38:45.907899    9599 start.go:83] releasing machines lock for "custom-flannel-782000", held for 2.352445833s
	W0731 12:38:45.908002    9599 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:45.924110    9599 out.go:177] * Deleting "custom-flannel-782000" in qemu2 ...
	W0731 12:38:45.950120    9599 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:45.950146    9599 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:50.951941    9599 start.go:360] acquireMachinesLock for custom-flannel-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:50.952462    9599 start.go:364] duration metric: took 403.291µs to acquireMachinesLock for "custom-flannel-782000"
	I0731 12:38:50.952574    9599 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:50.952827    9599 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:50.967481    9599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:51.015470    9599 start.go:159] libmachine.API.Create for "custom-flannel-782000" (driver="qemu2")
	I0731 12:38:51.015516    9599 client.go:168] LocalClient.Create starting
	I0731 12:38:51.015622    9599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:51.015694    9599 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:51.015712    9599 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:51.015783    9599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:51.015835    9599 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:51.015845    9599 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:51.016343    9599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:51.175593    9599 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:51.283235    9599 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:51.283240    9599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:51.283451    9599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:51.292675    9599 main.go:141] libmachine: STDOUT: 
	I0731 12:38:51.292700    9599 main.go:141] libmachine: STDERR: 
	I0731 12:38:51.292747    9599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2 +20000M
	I0731 12:38:51.300576    9599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:51.300594    9599 main.go:141] libmachine: STDERR: 
	I0731 12:38:51.300607    9599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:51.300611    9599 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:51.300619    9599 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:51.300650    9599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d7:cd:fd:24:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/custom-flannel-782000/disk.qcow2
	I0731 12:38:51.302334    9599 main.go:141] libmachine: STDOUT: 
	I0731 12:38:51.302356    9599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:51.302376    9599 client.go:171] duration metric: took 286.877709ms to LocalClient.Create
	I0731 12:38:53.304403    9599 start.go:128] duration metric: took 2.351720666s to createHost
	I0731 12:38:53.304477    9599 start.go:83] releasing machines lock for "custom-flannel-782000", held for 2.352159666s
	W0731 12:38:53.304790    9599 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:53.315302    9599 out.go:177] 
	W0731 12:38:53.322409    9599 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:53.322451    9599 out.go:239] * 
	* 
	W0731 12:38:53.325036    9599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:53.334274    9599 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
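Note that the disk preparation preceding each launch succeeds every time: qemu-img convert turns the raw base image into qcow2, and qemu-img resize grows it by 20000 MB, which isolates the failure to the socket_vmnet connection that follows. A hypothetical Go wrapper reproducing the same two invocations (stand-in file names; qemu-img must be on PATH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Stand-in paths; the logs use the machine directory under
		// $MINIKUBE_HOME/machines/<profile>/.
		raw, qcow := "disk.qcow2.raw", "disk.qcow2"

		// qemu-img convert -f raw -O qcow2 <raw> <qcow>
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow); err != nil {
			fmt.Fprintln(os.Stderr, "convert failed:", err)
			os.Exit(1)
		}
		// qemu-img resize <qcow> +20000M  (grow by 20000 MB, as logged)
		if err := run("qemu-img", "resize", qcow, "+20000M"); err != nil {
			fmt.Fprintln(os.Stderr, "resize failed:", err)
			os.Exit(1)
		}
	}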

TestNetworkPlugins/group/false/Start (9.85s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.843725834s)

-- stdout --
	* [false-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-782000" primary control-plane node in "false-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 12:38:55.730829    9716 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:55.731194    9716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:55.731199    9716 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:55.731202    9716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:55.731401    9716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:38:55.732786    9716 out.go:298] Setting JSON to false
	I0731 12:38:55.749658    9716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5904,"bootTime":1722448831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:38:55.749729    9716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:55.756345    9716 out.go:177] * [false-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:55.764245    9716 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:38:55.764311    9716 notify.go:220] Checking for updates...
	I0731 12:38:55.771232    9716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:38:55.774204    9716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:55.777181    9716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:55.780174    9716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:38:55.783212    9716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:55.785006    9716 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:55.785080    9716 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:55.785128    9716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:55.789211    9716 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:55.796036    9716 start.go:297] selected driver: qemu2
	I0731 12:38:55.796043    9716 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:55.796050    9716 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:55.798356    9716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:55.802194    9716 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:55.805231    9716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:55.805265    9716 cni.go:84] Creating CNI manager for "false"
	I0731 12:38:55.805294    9716 start.go:340] cluster config:
	{Name:false-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:55.809015    9716 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:55.816160    9716 out.go:177] * Starting "false-782000" primary control-plane node in "false-782000" cluster
	I0731 12:38:55.820344    9716 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:55.820362    9716 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:55.820379    9716 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:55.820447    9716 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:55.820453    9716 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:55.820528    9716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/false-782000/config.json ...
	I0731 12:38:55.820549    9716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/false-782000/config.json: {Name:mkb84bf560ae7b6773ba4d9e331ba8d1fe1e710b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:55.820784    9716 start.go:360] acquireMachinesLock for false-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:55.820820    9716 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "false-782000"
	I0731 12:38:55.820831    9716 start.go:93] Provisioning new machine with config: &{Name:false-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:55.820875    9716 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:55.829227    9716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:55.847403    9716 start.go:159] libmachine.API.Create for "false-782000" (driver="qemu2")
	I0731 12:38:55.847430    9716 client.go:168] LocalClient.Create starting
	I0731 12:38:55.847488    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:38:55.847518    9716 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:55.847527    9716 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:55.847562    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:38:55.847585    9716 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:55.847592    9716 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:55.847988    9716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:38:55.996443    9716 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:56.108670    9716 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:56.108676    9716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:56.108874    9716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:38:56.118083    9716 main.go:141] libmachine: STDOUT: 
	I0731 12:38:56.118098    9716 main.go:141] libmachine: STDERR: 
	I0731 12:38:56.118146    9716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2 +20000M
	I0731 12:38:56.126005    9716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:56.126018    9716 main.go:141] libmachine: STDERR: 
	I0731 12:38:56.126039    9716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:38:56.126044    9716 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:56.126056    9716 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:56.126085    9716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f6:fa:46:aa:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:38:56.127722    9716 main.go:141] libmachine: STDOUT: 
	I0731 12:38:56.127743    9716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:56.127759    9716 client.go:171] duration metric: took 280.342042ms to LocalClient.Create
	I0731 12:38:58.129820    9716 start.go:128] duration metric: took 2.30906525s to createHost
	I0731 12:38:58.129932    9716 start.go:83] releasing machines lock for "false-782000", held for 2.30920075s
	W0731 12:38:58.129994    9716 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:58.140886    9716 out.go:177] * Deleting "false-782000" in qemu2 ...
	W0731 12:38:58.174182    9716 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:58.174229    9716 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:03.174299    9716 start.go:360] acquireMachinesLock for false-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:03.174842    9716 start.go:364] duration metric: took 446.667µs to acquireMachinesLock for "false-782000"
	I0731 12:39:03.175125    9716 start.go:93] Provisioning new machine with config: &{Name:false-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:03.175425    9716 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:03.191062    9716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:03.240572    9716 start.go:159] libmachine.API.Create for "false-782000" (driver="qemu2")
	I0731 12:39:03.240630    9716 client.go:168] LocalClient.Create starting
	I0731 12:39:03.240761    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:03.240853    9716 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:03.240871    9716 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:03.240939    9716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:03.240984    9716 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:03.240998    9716 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:03.241537    9716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:03.400964    9716 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:03.481788    9716 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:03.481793    9716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:03.481992    9716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:39:03.491426    9716 main.go:141] libmachine: STDOUT: 
	I0731 12:39:03.491446    9716 main.go:141] libmachine: STDERR: 
	I0731 12:39:03.491492    9716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2 +20000M
	I0731 12:39:03.499308    9716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:03.499322    9716 main.go:141] libmachine: STDERR: 
	I0731 12:39:03.499340    9716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:39:03.499343    9716 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:03.499353    9716 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:03.499382    9716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:52:f0:02:8c:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/false-782000/disk.qcow2
	I0731 12:39:03.501033    9716 main.go:141] libmachine: STDOUT: 
	I0731 12:39:03.501045    9716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:03.501058    9716 client.go:171] duration metric: took 260.432708ms to LocalClient.Create
	I0731 12:39:05.503176    9716 start.go:128] duration metric: took 2.327815167s to createHost
	I0731 12:39:05.503308    9716 start.go:83] releasing machines lock for "false-782000", held for 2.328527208s
	W0731 12:39:05.503693    9716 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:05.512456    9716 out.go:177] 
	W0731 12:39:05.519553    9716 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:05.519585    9716 out.go:239] * 
	* 
	W0731 12:39:05.522371    9716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:05.531434    9716 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
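Note that each start attempt already retries once ("Will try again in 5 seconds"), deleting and recreating the VM, and both attempts hit the same refusal, so the failure is independent of per-profile state. It can be reproduced without minikube by invoking the same client the driver execs; in this sketch the paths come from the log and the trailing command is an arbitrary placeholder (socket_vmnet_client connects to the socket first, then execs the given command with the connection passed as an inherited fd, visible as fd=3 in the qemu command line above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
    # expected while the daemon is down:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused
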
TestNetworkPlugins/group/kindnet/Start (9.81s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.809440541s)
-- stdout --
	* [kindnet-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-782000" primary control-plane node in "kindnet-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 12:39:07.748258    9828 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:07.748366    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:07.748369    9828 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:07.748372    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:07.748497    9828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:39:07.749507    9828 out.go:298] Setting JSON to false
	I0731 12:39:07.765693    9828 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5916,"bootTime":1722448831,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:39:07.765808    9828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:07.772111    9828 out.go:177] * [kindnet-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:07.779059    9828 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:39:07.779134    9828 notify.go:220] Checking for updates...
	I0731 12:39:07.786084    9828 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:39:07.790051    9828 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:07.794047    9828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:07.797069    9828 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:39:07.799991    9828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:07.803329    9828 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:07.803396    9828 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:07.803437    9828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:07.807108    9828 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:07.814067    9828 start.go:297] selected driver: qemu2
	I0731 12:39:07.814073    9828 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:07.814081    9828 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:07.816636    9828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:07.820101    9828 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:07.824152    9828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:07.824187    9828 cni.go:84] Creating CNI manager for "kindnet"
	I0731 12:39:07.824191    9828 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:39:07.824227    9828 start.go:340] cluster config:
	{Name:kindnet-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:07.828217    9828 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:07.837057    9828 out.go:177] * Starting "kindnet-782000" primary control-plane node in "kindnet-782000" cluster
	I0731 12:39:07.841041    9828 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:07.841059    9828 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:07.841074    9828 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:07.841156    9828 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:07.841162    9828 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:07.841223    9828 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kindnet-782000/config.json ...
	I0731 12:39:07.841234    9828 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kindnet-782000/config.json: {Name:mk0009fc8892a68326c2e20e4be98ceba838c9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:07.841465    9828 start.go:360] acquireMachinesLock for kindnet-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:07.841506    9828 start.go:364] duration metric: took 34.167µs to acquireMachinesLock for "kindnet-782000"
	I0731 12:39:07.841517    9828 start.go:93] Provisioning new machine with config: &{Name:kindnet-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:07.841550    9828 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:07.846064    9828 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:07.864101    9828 start.go:159] libmachine.API.Create for "kindnet-782000" (driver="qemu2")
	I0731 12:39:07.864141    9828 client.go:168] LocalClient.Create starting
	I0731 12:39:07.864207    9828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:07.864238    9828 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:07.864247    9828 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:07.864284    9828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:07.864309    9828 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:07.864317    9828 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:07.864679    9828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:08.013142    9828 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:08.106901    9828 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:08.106907    9828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:08.107116    9828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:08.116157    9828 main.go:141] libmachine: STDOUT: 
	I0731 12:39:08.116174    9828 main.go:141] libmachine: STDERR: 
	I0731 12:39:08.116229    9828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2 +20000M
	I0731 12:39:08.123939    9828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:08.123953    9828 main.go:141] libmachine: STDERR: 
	I0731 12:39:08.123976    9828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:08.123985    9828 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:08.123996    9828 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:08.124020    9828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:70:f8:66:88:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:08.125601    9828 main.go:141] libmachine: STDOUT: 
	I0731 12:39:08.125616    9828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:08.125633    9828 client.go:171] duration metric: took 261.495583ms to LocalClient.Create
	I0731 12:39:10.127734    9828 start.go:128] duration metric: took 2.286253209s to createHost
	I0731 12:39:10.127799    9828 start.go:83] releasing machines lock for "kindnet-782000", held for 2.286372208s
	W0731 12:39:10.127919    9828 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:10.139274    9828 out.go:177] * Deleting "kindnet-782000" in qemu2 ...
	W0731 12:39:10.167545    9828 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:10.167568    9828 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:15.169592    9828 start.go:360] acquireMachinesLock for kindnet-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:15.170126    9828 start.go:364] duration metric: took 358.041µs to acquireMachinesLock for "kindnet-782000"
	I0731 12:39:15.170256    9828 start.go:93] Provisioning new machine with config: &{Name:kindnet-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:15.170566    9828 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:15.179100    9828 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:15.226140    9828 start.go:159] libmachine.API.Create for "kindnet-782000" (driver="qemu2")
	I0731 12:39:15.226198    9828 client.go:168] LocalClient.Create starting
	I0731 12:39:15.226316    9828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:15.226372    9828 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:15.226389    9828 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:15.226455    9828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:15.226499    9828 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:15.226511    9828 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:15.227086    9828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:15.387325    9828 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:15.462822    9828 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:15.462827    9828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:15.463021    9828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:15.472438    9828 main.go:141] libmachine: STDOUT: 
	I0731 12:39:15.472454    9828 main.go:141] libmachine: STDERR: 
	I0731 12:39:15.472505    9828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2 +20000M
	I0731 12:39:15.480337    9828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:15.480351    9828 main.go:141] libmachine: STDERR: 
	I0731 12:39:15.480367    9828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:15.480370    9828 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:15.480387    9828 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:15.480420    9828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:20:4e:f4:38:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kindnet-782000/disk.qcow2
	I0731 12:39:15.482101    9828 main.go:141] libmachine: STDOUT: 
	I0731 12:39:15.482116    9828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:15.482129    9828 client.go:171] duration metric: took 255.935208ms to LocalClient.Create
	I0731 12:39:17.484245    9828 start.go:128] duration metric: took 2.313722958s to createHost
	I0731 12:39:17.484293    9828 start.go:83] releasing machines lock for "kindnet-782000", held for 2.314212666s
	W0731 12:39:17.484651    9828 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:17.493838    9828 out.go:177] 
	W0731 12:39:17.503145    9828 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:17.503168    9828 out.go:239] * 
	* 
	W0731 12:39:17.505511    9828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:17.513964    9828 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
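Restarting the socket_vmnet daemon on the build agent should clear this whole group of failures. One possible invocation, sketched from the /opt/socket_vmnet layout shown in the logs; the gateway address is the upstream project's documented default, not something confirmed by this run, and vmnet access requires root:

    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
    # then retry one profile, per the advice printed in the log:
    #   out/minikube-darwin-arm64 delete -p kindnet-782000
    #   out/minikube-darwin-arm64 start -p kindnet-782000 --driver=qemu2
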
TestNetworkPlugins/group/flannel/Start (9.8s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.799942833s)
-- stdout --
	* [flannel-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-782000" primary control-plane node in "flannel-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 12:39:19.767492    9943 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:19.767647    9943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:19.767650    9943 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:19.767652    9943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:19.767774    9943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:39:19.768842    9943 out.go:298] Setting JSON to false
	I0731 12:39:19.784870    9943 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5928,"bootTime":1722448831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:39:19.784929    9943 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:19.791664    9943 out.go:177] * [flannel-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:19.797672    9943 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:39:19.797732    9943 notify.go:220] Checking for updates...
	I0731 12:39:19.805592    9943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:39:19.809563    9943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:19.813567    9943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:19.816547    9943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:39:19.819656    9943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:19.822892    9943 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:19.822961    9943 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:19.823012    9943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:19.827547    9943 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:19.834627    9943 start.go:297] selected driver: qemu2
	I0731 12:39:19.834634    9943 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:19.834640    9943 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:19.836959    9943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:19.840515    9943 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:19.851323    9943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:19.851343    9943 cni.go:84] Creating CNI manager for "flannel"
	I0731 12:39:19.851348    9943 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 12:39:19.851392    9943 start.go:340] cluster config:
	{Name:flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:19.855326    9943 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:19.863621    9943 out.go:177] * Starting "flannel-782000" primary control-plane node in "flannel-782000" cluster
	I0731 12:39:19.867529    9943 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:19.867545    9943 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:19.867560    9943 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:19.867627    9943 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:19.867633    9943 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:19.867701    9943 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/flannel-782000/config.json ...
	I0731 12:39:19.867713    9943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/flannel-782000/config.json: {Name:mk7dac1eb262254567c22e04ea4d9ebd5abd2a91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:19.867930    9943 start.go:360] acquireMachinesLock for flannel-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:19.867966    9943 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "flannel-782000"
	I0731 12:39:19.867977    9943 start.go:93] Provisioning new machine with config: &{Name:flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:19.868003    9943 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:19.876546    9943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:19.895481    9943 start.go:159] libmachine.API.Create for "flannel-782000" (driver="qemu2")
	I0731 12:39:19.895511    9943 client.go:168] LocalClient.Create starting
	I0731 12:39:19.895578    9943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:19.895619    9943 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:19.895629    9943 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:19.895669    9943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:19.895695    9943 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:19.895707    9943 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:19.896079    9943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:20.043933    9943 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:20.102173    9943 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:20.102183    9943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:20.102406    9943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:20.111435    9943 main.go:141] libmachine: STDOUT: 
	I0731 12:39:20.111452    9943 main.go:141] libmachine: STDERR: 
	I0731 12:39:20.111494    9943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2 +20000M
	I0731 12:39:20.119359    9943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:20.119372    9943 main.go:141] libmachine: STDERR: 
	I0731 12:39:20.119385    9943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:20.119390    9943 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:20.119405    9943 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:20.119441    9943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:1d:35:11:32:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:20.121098    9943 main.go:141] libmachine: STDOUT: 
	I0731 12:39:20.121111    9943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:20.121129    9943 client.go:171] duration metric: took 225.618917ms to LocalClient.Create
	I0731 12:39:22.123270    9943 start.go:128] duration metric: took 2.255312916s to createHost
	I0731 12:39:22.123325    9943 start.go:83] releasing machines lock for "flannel-782000", held for 2.255413834s
	W0731 12:39:22.123393    9943 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:22.139557    9943 out.go:177] * Deleting "flannel-782000" in qemu2 ...
	W0731 12:39:22.165939    9943 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:22.165961    9943 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:27.167975    9943 start.go:360] acquireMachinesLock for flannel-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:27.168397    9943 start.go:364] duration metric: took 343.042µs to acquireMachinesLock for "flannel-782000"
	I0731 12:39:27.168507    9943 start.go:93] Provisioning new machine with config: &{Name:flannel-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:27.168768    9943 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:27.185440    9943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:27.236777    9943 start.go:159] libmachine.API.Create for "flannel-782000" (driver="qemu2")
	I0731 12:39:27.236822    9943 client.go:168] LocalClient.Create starting
	I0731 12:39:27.236918    9943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:27.236987    9943 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:27.237003    9943 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:27.237062    9943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:27.237112    9943 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:27.237131    9943 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:27.237636    9943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:27.396619    9943 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:27.474382    9943 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:27.474388    9943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:27.474890    9943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:27.483878    9943 main.go:141] libmachine: STDOUT: 
	I0731 12:39:27.483894    9943 main.go:141] libmachine: STDERR: 
	I0731 12:39:27.483966    9943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2 +20000M
	I0731 12:39:27.491890    9943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:27.491903    9943 main.go:141] libmachine: STDERR: 
	I0731 12:39:27.491916    9943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:27.491922    9943 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:27.491933    9943 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:27.491968    9943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:6f:e5:1e:9c:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/flannel-782000/disk.qcow2
	I0731 12:39:27.493550    9943 main.go:141] libmachine: STDOUT: 
	I0731 12:39:27.493573    9943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:27.493585    9943 client.go:171] duration metric: took 256.764917ms to LocalClient.Create
	I0731 12:39:29.495703    9943 start.go:128] duration metric: took 2.326972792s to createHost
	I0731 12:39:29.495753    9943 start.go:83] releasing machines lock for "flannel-782000", held for 2.327394416s
	W0731 12:39:29.496167    9943 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:29.506877    9943 out.go:177] 
	W0731 12:39:29.513955    9943 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:29.513987    9943 out.go:239] * 
	* 
	W0731 12:39:29.516479    9943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:29.524971    9943 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
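Note: every start failure in this group traces back to the same host-side condition: nothing is accepting connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits before QEMU can attach its network device. What follows is a minimal, hypothetical Go sketch (not part of the test suite; the file name and messages are invented) of the same connectivity probe against the default SocketVMnetPath shown in the config dump above:

// socketcheck.go - a hypothetical diagnostic sketch, not minikube code.
// It dials the Unix socket that socket_vmnet_client needs; a "connection
// refused" here matches the STDERR in the log above and usually means the
// socket_vmnet daemon is not running (or not listening at this path).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // default SocketVMnetPath from the config dump
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}

Run on the affected host, a "connection refused" from this probe would suggest the daemon itself is down, rather than a problem with the client binary or with QEMU.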

TestNetworkPlugins/group/enable-default-cni/Start (9.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.887644666s)

-- stdout --
	* [enable-default-cni-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-782000" primary control-plane node in "enable-default-cni-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:31.855298   10062 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:31.855425   10062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:31.855428   10062 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:31.855430   10062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:31.855563   10062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:39:31.856614   10062 out.go:298] Setting JSON to false
	I0731 12:39:31.872933   10062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5940,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:39:31.872996   10062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:31.878957   10062 out.go:177] * [enable-default-cni-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:31.885886   10062 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:39:31.885926   10062 notify.go:220] Checking for updates...
	I0731 12:39:31.892703   10062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:39:31.896904   10062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:31.900882   10062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:31.903859   10062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:39:31.906817   10062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:31.910187   10062 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:31.910257   10062 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:31.910307   10062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:31.912905   10062 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:31.919895   10062 start.go:297] selected driver: qemu2
	I0731 12:39:31.919902   10062 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:31.919912   10062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:31.922243   10062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:31.923548   10062 out.go:177] * Automatically selected the socket_vmnet network
	E0731 12:39:31.927919   10062 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 12:39:31.927932   10062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:31.927951   10062 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:39:31.927966   10062 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:39:31.927998   10062 start.go:340] cluster config:
	{Name:enable-default-cni-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:31.931726   10062 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:31.940931   10062 out.go:177] * Starting "enable-default-cni-782000" primary control-plane node in "enable-default-cni-782000" cluster
	I0731 12:39:31.944826   10062 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:31.944850   10062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:31.944866   10062 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:31.944932   10062 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:31.944937   10062 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:31.944997   10062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/enable-default-cni-782000/config.json ...
	I0731 12:39:31.945008   10062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/enable-default-cni-782000/config.json: {Name:mk45f4ccdf9c5b1865b1c1c654b0a33261f36e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:31.945240   10062 start.go:360] acquireMachinesLock for enable-default-cni-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:31.945277   10062 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "enable-default-cni-782000"
	I0731 12:39:31.945289   10062 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:31.945327   10062 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:31.953896   10062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:31.971844   10062 start.go:159] libmachine.API.Create for "enable-default-cni-782000" (driver="qemu2")
	I0731 12:39:31.971881   10062 client.go:168] LocalClient.Create starting
	I0731 12:39:31.971946   10062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:31.971974   10062 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:31.971983   10062 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:31.972026   10062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:31.972050   10062 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:31.972058   10062 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:31.972508   10062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:32.120778   10062 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:32.254294   10062 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:32.254300   10062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:32.254523   10062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:32.263989   10062 main.go:141] libmachine: STDOUT: 
	I0731 12:39:32.264017   10062 main.go:141] libmachine: STDERR: 
	I0731 12:39:32.264070   10062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2 +20000M
	I0731 12:39:32.271780   10062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:32.271794   10062 main.go:141] libmachine: STDERR: 
	I0731 12:39:32.271819   10062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:32.271823   10062 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:32.271835   10062 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:32.271872   10062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:53:2c:5a:ec:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:32.273405   10062 main.go:141] libmachine: STDOUT: 
	I0731 12:39:32.273424   10062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:32.273440   10062 client.go:171] duration metric: took 301.561333ms to LocalClient.Create
	I0731 12:39:34.275555   10062 start.go:128] duration metric: took 2.330267125s to createHost
	I0731 12:39:34.275615   10062 start.go:83] releasing machines lock for "enable-default-cni-782000", held for 2.330388625s
	W0731 12:39:34.275770   10062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:34.281859   10062 out.go:177] * Deleting "enable-default-cni-782000" in qemu2 ...
	W0731 12:39:34.310323   10062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:34.310364   10062 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:39.312440   10062 start.go:360] acquireMachinesLock for enable-default-cni-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:39.312855   10062 start.go:364] duration metric: took 338.667µs to acquireMachinesLock for "enable-default-cni-782000"
	I0731 12:39:39.312975   10062 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:39.313205   10062 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:39.319769   10062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:39.369155   10062 start.go:159] libmachine.API.Create for "enable-default-cni-782000" (driver="qemu2")
	I0731 12:39:39.369205   10062 client.go:168] LocalClient.Create starting
	I0731 12:39:39.369316   10062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:39.369381   10062 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:39.369410   10062 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:39.369471   10062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:39.369515   10062 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:39.369526   10062 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:39.370076   10062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:39.529148   10062 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:39.648758   10062 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:39.648766   10062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:39.648977   10062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:39.658200   10062 main.go:141] libmachine: STDOUT: 
	I0731 12:39:39.658218   10062 main.go:141] libmachine: STDERR: 
	I0731 12:39:39.658283   10062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2 +20000M
	I0731 12:39:39.666226   10062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:39.666240   10062 main.go:141] libmachine: STDERR: 
	I0731 12:39:39.666249   10062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:39.666255   10062 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:39.666264   10062 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:39.666307   10062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:da:c2:4e:94:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/enable-default-cni-782000/disk.qcow2
	I0731 12:39:39.667910   10062 main.go:141] libmachine: STDOUT: 
	I0731 12:39:39.667924   10062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:39.667937   10062 client.go:171] duration metric: took 298.733083ms to LocalClient.Create
	I0731 12:39:41.670065   10062 start.go:128] duration metric: took 2.356887584s to createHost
	I0731 12:39:41.670123   10062 start.go:83] releasing machines lock for "enable-default-cni-782000", held for 2.357298459s
	W0731 12:39:41.670536   10062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:41.683150   10062 out.go:177] 
	W0731 12:39:41.688301   10062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:41.688355   10062 out.go:239] * 
	* 
	W0731 12:39:41.690989   10062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:41.699127   10062 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.89s)
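Note: as the log above shows, minikube does not give up on the first error; it deletes the half-created "enable-default-cni-782000" VM, waits five seconds ("Will try again in 5 seconds ..."), retries createHost once, and only then exits with status 80 (GUEST_PROVISION). Below is a simplified, hypothetical Go sketch of that retry shape; createHost here is a stub standing in for libmachine.API.Create, not minikube's actual implementation:

// retrysketch.go - a hypothetical, simplified sketch of the create/retry flow
// visible in the log above; illustrative only, not minikube code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real host-creation path, which in this report
// always fails because socket_vmnet is unreachable on the host.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // one initial try plus the single retry seen in the log
	for i := 1; i <= attempts; i++ {
		err := createHost()
		if err == nil {
			fmt.Println("host created")
			return
		}
		fmt.Printf("StartHost failed (attempt %d): %v\n", i, err)
		if i < attempts {
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		}
	}
	fmt.Println("giving up: GUEST_PROVISION, exit status 80")
}

Because the root cause is external to the VM under creation, the retry is guaranteed to fail the same way, which is why every test in this group lands at roughly the same ~10s duration.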

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.792751209s)

-- stdout --
	* [bridge-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-782000" primary control-plane node in "bridge-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:43.844934   10171 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:43.845072   10171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:43.845075   10171 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:43.845078   10171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:43.845231   10171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:39:43.846247   10171 out.go:298] Setting JSON to false
	I0731 12:39:43.862295   10171 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5952,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:39:43.862393   10171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:43.869286   10171 out.go:177] * [bridge-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:43.876365   10171 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:39:43.876399   10171 notify.go:220] Checking for updates...
	I0731 12:39:43.883231   10171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:39:43.887086   10171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:43.891290   10171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:43.894275   10171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:39:43.895794   10171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:43.899592   10171 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:43.899656   10171 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:43.899704   10171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:43.903238   10171 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:43.909246   10171 start.go:297] selected driver: qemu2
	I0731 12:39:43.909254   10171 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:43.909261   10171 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:43.911583   10171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:43.916204   10171 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:43.917807   10171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:43.917860   10171 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:39:43.917864   10171 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:39:43.917886   10171 start.go:340] cluster config:
	{Name:bridge-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:43.921490   10171 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:43.930268   10171 out.go:177] * Starting "bridge-782000" primary control-plane node in "bridge-782000" cluster
	I0731 12:39:43.934271   10171 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:43.934289   10171 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:43.934305   10171 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:43.934381   10171 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:43.934387   10171 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:43.934444   10171 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/bridge-782000/config.json ...
	I0731 12:39:43.934457   10171 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/bridge-782000/config.json: {Name:mk60b8c50a85076afdc06353610d8fa2c2ee4d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:43.934663   10171 start.go:360] acquireMachinesLock for bridge-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:43.934695   10171 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "bridge-782000"
	I0731 12:39:43.934705   10171 start.go:93] Provisioning new machine with config: &{Name:bridge-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:43.934742   10171 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:43.943215   10171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:43.960582   10171 start.go:159] libmachine.API.Create for "bridge-782000" (driver="qemu2")
	I0731 12:39:43.960607   10171 client.go:168] LocalClient.Create starting
	I0731 12:39:43.960667   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:43.960701   10171 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:43.960709   10171 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:43.960747   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:43.960774   10171 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:43.960785   10171 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:43.961133   10171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:44.110387   10171 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:44.180569   10171 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:44.180582   10171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:44.180778   10171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:44.189818   10171 main.go:141] libmachine: STDOUT: 
	I0731 12:39:44.189835   10171 main.go:141] libmachine: STDERR: 
	I0731 12:39:44.189893   10171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2 +20000M
	I0731 12:39:44.197669   10171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:44.197683   10171 main.go:141] libmachine: STDERR: 
	I0731 12:39:44.197696   10171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:44.197704   10171 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:44.197716   10171 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:44.197740   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:5b:d8:bb:7c:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:44.199319   10171 main.go:141] libmachine: STDOUT: 
	I0731 12:39:44.199334   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:44.199351   10171 client.go:171] duration metric: took 238.745208ms to LocalClient.Create
	I0731 12:39:46.201559   10171 start.go:128] duration metric: took 2.266842792s to createHost
	I0731 12:39:46.201650   10171 start.go:83] releasing machines lock for "bridge-782000", held for 2.266995958s
	W0731 12:39:46.201712   10171 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:46.213075   10171 out.go:177] * Deleting "bridge-782000" in qemu2 ...
	W0731 12:39:46.250492   10171 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:46.250524   10171 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:51.252611   10171 start.go:360] acquireMachinesLock for bridge-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:51.253228   10171 start.go:364] duration metric: took 424.75µs to acquireMachinesLock for "bridge-782000"
	I0731 12:39:51.253347   10171 start.go:93] Provisioning new machine with config: &{Name:bridge-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:51.253669   10171 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:51.268184   10171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:51.319264   10171 start.go:159] libmachine.API.Create for "bridge-782000" (driver="qemu2")
	I0731 12:39:51.319307   10171 client.go:168] LocalClient.Create starting
	I0731 12:39:51.319421   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:51.319488   10171 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:51.319507   10171 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:51.319566   10171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:51.319611   10171 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:51.319624   10171 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:51.320161   10171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:51.482225   10171 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:51.543204   10171 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:51.543209   10171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:51.543432   10171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:51.552617   10171 main.go:141] libmachine: STDOUT: 
	I0731 12:39:51.552645   10171 main.go:141] libmachine: STDERR: 
	I0731 12:39:51.552690   10171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2 +20000M
	I0731 12:39:51.560447   10171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:51.560472   10171 main.go:141] libmachine: STDERR: 
	I0731 12:39:51.560482   10171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:51.560486   10171 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:51.560497   10171 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:51.560539   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:38:14:45:aa:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/bridge-782000/disk.qcow2
	I0731 12:39:51.562227   10171 main.go:141] libmachine: STDOUT: 
	I0731 12:39:51.562241   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:51.562252   10171 client.go:171] duration metric: took 242.9435ms to LocalClient.Create
	I0731 12:39:53.564460   10171 start.go:128] duration metric: took 2.310735833s to createHost
	I0731 12:39:53.564518   10171 start.go:83] releasing machines lock for "bridge-782000", held for 2.311310042s
	W0731 12:39:53.565007   10171 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:53.575478   10171 out.go:177] 
	W0731 12:39:53.581636   10171 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:53.581662   10171 out.go:239] * 
	* 
	W0731 12:39:53.583985   10171 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:53.594527   10171 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
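
Every retry in this failure follows the same pattern: the disk image is created successfully, but the qemu2 driver launches the VM through socket_vmnet_client and the connect to /var/run/socket_vmnet is refused before QEMU ever runs, so the bridge plugin itself is never exercised. The kubenet and old-k8s-version failures below fail identically. A minimal triage sketch for the CI host, using only the binary and socket paths that appear in the log above (the launchd service label is an assumption, since installs vary, hence the loose grep):

	# Does the socket exist, and is a socket_vmnet daemon loaded at all?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# socket_vmnet_client connects to the socket and then execs its remaining
	# arguments with the connection inherited as fd 3 (matching the
	# "-netdev socket,id=net0,fd=3" flag in the qemu command above);
	# running `true` turns it into a pure connectivity probe.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket_vmnet reachable"

If the probe reproduces the same "Connection refused" error, restarting the socket_vmnet daemon on the host is the likely fix; none of these tests can pass until the socket accepts connections.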

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-782000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.840083167s)

-- stdout --
	* [kubenet-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-782000" primary control-plane node in "kubenet-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:55.847316   10281 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:55.847427   10281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:55.847431   10281 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:55.847434   10281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:55.847556   10281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:39:55.848606   10281 out.go:298] Setting JSON to false
	I0731 12:39:55.864431   10281 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5964,"bootTime":1722448831,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:39:55.864505   10281 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:55.869722   10281 out.go:177] * [kubenet-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:55.877676   10281 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:39:55.877704   10281 notify.go:220] Checking for updates...
	I0731 12:39:55.883777   10281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:39:55.886729   10281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:55.889746   10281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:55.891268   10281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:39:55.894690   10281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:55.898097   10281 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:55.898164   10281 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:55.898223   10281 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:55.899878   10281 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:55.906733   10281 start.go:297] selected driver: qemu2
	I0731 12:39:55.906740   10281 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:55.906747   10281 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:55.909104   10281 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:55.912528   10281 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:55.915806   10281 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:55.915825   10281 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 12:39:55.915859   10281 start.go:340] cluster config:
	{Name:kubenet-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:55.919593   10281 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:55.927722   10281 out.go:177] * Starting "kubenet-782000" primary control-plane node in "kubenet-782000" cluster
	I0731 12:39:55.931704   10281 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:55.931720   10281 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:55.931739   10281 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:55.931802   10281 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:55.931808   10281 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:55.931864   10281 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kubenet-782000/config.json ...
	I0731 12:39:55.931874   10281 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/kubenet-782000/config.json: {Name:mkce30df3c6c75bbcd0ccfecc3c45b98f37cddd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:55.932090   10281 start.go:360] acquireMachinesLock for kubenet-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:55.932122   10281 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "kubenet-782000"
	I0731 12:39:55.932133   10281 start.go:93] Provisioning new machine with config: &{Name:kubenet-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:55.932159   10281 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:55.940705   10281 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:55.958090   10281 start.go:159] libmachine.API.Create for "kubenet-782000" (driver="qemu2")
	I0731 12:39:55.958117   10281 client.go:168] LocalClient.Create starting
	I0731 12:39:55.958177   10281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:39:55.958205   10281 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:55.958213   10281 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:55.958253   10281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:39:55.958275   10281 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:55.958283   10281 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:55.958693   10281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:39:56.109903   10281 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:56.246705   10281 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:56.246712   10281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:56.246957   10281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:39:56.256430   10281 main.go:141] libmachine: STDOUT: 
	I0731 12:39:56.256445   10281 main.go:141] libmachine: STDERR: 
	I0731 12:39:56.256495   10281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2 +20000M
	I0731 12:39:56.264241   10281 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:56.264253   10281 main.go:141] libmachine: STDERR: 
	I0731 12:39:56.264275   10281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:39:56.264279   10281 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:56.264293   10281 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:56.264326   10281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:fc:fc:9a:ec:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:39:56.265960   10281 main.go:141] libmachine: STDOUT: 
	I0731 12:39:56.265973   10281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:56.265989   10281 client.go:171] duration metric: took 307.875ms to LocalClient.Create
	I0731 12:39:58.268125   10281 start.go:128] duration metric: took 2.335995959s to createHost
	I0731 12:39:58.268196   10281 start.go:83] releasing machines lock for "kubenet-782000", held for 2.336117416s
	W0731 12:39:58.268259   10281 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:58.278460   10281 out.go:177] * Deleting "kubenet-782000" in qemu2 ...
	W0731 12:39:58.308231   10281 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:58.308255   10281 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:03.310453   10281 start.go:360] acquireMachinesLock for kubenet-782000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:03.310899   10281 start.go:364] duration metric: took 344.917µs to acquireMachinesLock for "kubenet-782000"
	I0731 12:40:03.311023   10281 start.go:93] Provisioning new machine with config: &{Name:kubenet-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:03.311313   10281 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:03.324954   10281 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:40:03.376573   10281 start.go:159] libmachine.API.Create for "kubenet-782000" (driver="qemu2")
	I0731 12:40:03.376623   10281 client.go:168] LocalClient.Create starting
	I0731 12:40:03.376741   10281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:03.376800   10281 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:03.376815   10281 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:03.376890   10281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:03.376935   10281 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:03.376949   10281 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:03.377445   10281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:03.538097   10281 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:03.598361   10281 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:03.598369   10281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:03.598584   10281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:40:03.607647   10281 main.go:141] libmachine: STDOUT: 
	I0731 12:40:03.607667   10281 main.go:141] libmachine: STDERR: 
	I0731 12:40:03.607717   10281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2 +20000M
	I0731 12:40:03.615457   10281 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:03.615477   10281 main.go:141] libmachine: STDERR: 
	I0731 12:40:03.615490   10281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:40:03.615495   10281 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:03.615505   10281 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:03.615530   10281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5c:76:f3:43:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/kubenet-782000/disk.qcow2
	I0731 12:40:03.617111   10281 main.go:141] libmachine: STDOUT: 
	I0731 12:40:03.617128   10281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:03.617140   10281 client.go:171] duration metric: took 240.518333ms to LocalClient.Create
	I0731 12:40:05.619282   10281 start.go:128] duration metric: took 2.307978625s to createHost
	I0731 12:40:05.619358   10281 start.go:83] releasing machines lock for "kubenet-782000", held for 2.308487583s
	W0731 12:40:05.619702   10281 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:05.629336   10281 out.go:177] 
	W0731 12:40:05.635367   10281 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:05.635390   10281 out.go:239] * 
	* 
	W0731 12:40:05.638044   10281 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:05.646319   10281 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.757316625s)

-- stdout --
	* [old-k8s-version-739000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-739000" primary control-plane node in "old-k8s-version-739000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-739000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:07.838454   10391 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:07.838586   10391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:07.838589   10391 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:07.838592   10391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:07.838732   10391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:07.839833   10391 out.go:298] Setting JSON to false
	I0731 12:40:07.855855   10391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5976,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:07.855945   10391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:07.861917   10391 out.go:177] * [old-k8s-version-739000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:07.867947   10391 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:07.867998   10391 notify.go:220] Checking for updates...
	I0731 12:40:07.875833   10391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:07.878887   10391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:07.881847   10391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:07.884874   10391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:07.887879   10391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:07.891106   10391 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:07.891172   10391 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:07.891223   10391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:07.894832   10391 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:40:07.900878   10391 start.go:297] selected driver: qemu2
	I0731 12:40:07.900883   10391 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:40:07.900891   10391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:07.903239   10391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:40:07.906827   10391 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:40:07.909923   10391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:07.909966   10391 cni.go:84] Creating CNI manager for ""
	I0731 12:40:07.909975   10391 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:40:07.910012   10391 start.go:340] cluster config:
	{Name:old-k8s-version-739000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-739000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:07.913750   10391 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:07.922879   10391 out.go:177] * Starting "old-k8s-version-739000" primary control-plane node in "old-k8s-version-739000" cluster
	I0731 12:40:07.926714   10391 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:40:07.926731   10391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:40:07.926742   10391 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:07.926808   10391 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:07.926822   10391 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:40:07.926895   10391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/old-k8s-version-739000/config.json ...
	I0731 12:40:07.926906   10391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/old-k8s-version-739000/config.json: {Name:mk763bcd94b23e5fdb160d66701713bb0bf50818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:40:07.927315   10391 start.go:360] acquireMachinesLock for old-k8s-version-739000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:07.927354   10391 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "old-k8s-version-739000"
	I0731 12:40:07.927365   10391 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-739000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-739000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:07.927401   10391 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:07.935675   10391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:07.953847   10391 start.go:159] libmachine.API.Create for "old-k8s-version-739000" (driver="qemu2")
	I0731 12:40:07.953879   10391 client.go:168] LocalClient.Create starting
	I0731 12:40:07.953956   10391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:07.953988   10391 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:07.953999   10391 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:07.954038   10391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:07.954061   10391 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:07.954072   10391 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:07.954509   10391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:08.104348   10391 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:08.157968   10391 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:08.157972   10391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:08.158182   10391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:08.167249   10391 main.go:141] libmachine: STDOUT: 
	I0731 12:40:08.167265   10391 main.go:141] libmachine: STDERR: 
	I0731 12:40:08.167305   10391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2 +20000M
	I0731 12:40:08.175073   10391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:08.175084   10391 main.go:141] libmachine: STDERR: 
	I0731 12:40:08.175093   10391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:08.175098   10391 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:08.175109   10391 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:08.175140   10391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:83:cc:ce:bf:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:08.176755   10391 main.go:141] libmachine: STDOUT: 
	I0731 12:40:08.176768   10391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:08.176784   10391 client.go:171] duration metric: took 222.90525ms to LocalClient.Create
	I0731 12:40:10.178911   10391 start.go:128] duration metric: took 2.251540084s to createHost
	I0731 12:40:10.178961   10391 start.go:83] releasing machines lock for "old-k8s-version-739000", held for 2.251648375s
	W0731 12:40:10.179036   10391 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:10.195092   10391 out.go:177] * Deleting "old-k8s-version-739000" in qemu2 ...
	W0731 12:40:10.221424   10391 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:10.221449   10391 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:15.223509   10391 start.go:360] acquireMachinesLock for old-k8s-version-739000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:15.223974   10391 start.go:364] duration metric: took 323.709µs to acquireMachinesLock for "old-k8s-version-739000"
	I0731 12:40:15.224133   10391 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-739000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-739000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:15.224415   10391 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:15.233871   10391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:15.286195   10391 start.go:159] libmachine.API.Create for "old-k8s-version-739000" (driver="qemu2")
	I0731 12:40:15.286242   10391 client.go:168] LocalClient.Create starting
	I0731 12:40:15.286357   10391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:15.286413   10391 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:15.286431   10391 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:15.286488   10391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:15.286532   10391 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:15.286548   10391 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:15.287024   10391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:15.449051   10391 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:15.503542   10391 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:15.503547   10391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:15.503765   10391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:15.513046   10391 main.go:141] libmachine: STDOUT: 
	I0731 12:40:15.513071   10391 main.go:141] libmachine: STDERR: 
	I0731 12:40:15.513114   10391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2 +20000M
	I0731 12:40:15.520911   10391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:15.520928   10391 main.go:141] libmachine: STDERR: 
	I0731 12:40:15.520938   10391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:15.520943   10391 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:15.520953   10391 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:15.520988   10391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:09:66:b2:d2:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:15.522554   10391 main.go:141] libmachine: STDOUT: 
	I0731 12:40:15.522574   10391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:15.522586   10391 client.go:171] duration metric: took 236.34425ms to LocalClient.Create
	I0731 12:40:17.524725   10391 start.go:128] duration metric: took 2.300331417s to createHost
	I0731 12:40:17.524821   10391 start.go:83] releasing machines lock for "old-k8s-version-739000", held for 2.30079875s
	W0731 12:40:17.525254   10391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-739000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-739000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:17.534682   10391 out.go:177] 
	W0731 12:40:17.540778   10391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:17.540802   10391 out.go:239] * 
	* 
	W0731 12:40:17.543513   10391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:17.552711   10391 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (66.315458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
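
Every error in the run above reduces to one condition: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on that path on the CI host. A minimal triage sketch, assuming the install paths shown in this log (the --vmnet-gateway value is a socket_vmnet default used for illustration, not something this report records):

	# Is the unix socket present, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, run the daemon in the foreground to watch for errors
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that socket accepts connections, every qemu2 start in this report will keep failing the same way, regardless of the Kubernetes version under test.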

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-739000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-739000 create -f testdata/busybox.yaml: exit status 1 (29.303458ms)

** stderr ** 
	error: context "old-k8s-version-739000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-739000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (30.772167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (30.474292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
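
DeployApp is a cascading failure rather than a new one: FirstStart never provisioned the VM, so no kubeconfig entry was written and kubectl has no old-k8s-version-739000 context to target. A quick sketch to confirm the missing context with stock kubectl (nothing here is specific to this harness):

	# list known contexts; old-k8s-version-739000 will be absent
	kubectl config get-contexts
	kubectl config current-context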

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-739000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-739000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-739000 describe deploy/metrics-server -n kube-system: exit status 1 (26.419542ms)

** stderr ** 
	error: context "old-k8s-version-739000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-739000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (29.754459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
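
Note that the harness reports a non-zero exit only for the kubectl step, so the addons enable command itself returned 0: with the host stopped, the enable appears to update only the addon settings stored in the profile, and no deployment ever exists for kubectl to describe. A sketch for checking what minikube believes is enabled for the profile (a standard minikube subcommand, using the same binary path as the rest of this report):

	out/minikube-darwin-arm64 -p old-k8s-version-739000 addons list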

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195125666s)

-- stdout --
	* [old-k8s-version-739000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-739000" primary control-plane node in "old-k8s-version-739000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:21.709094   10441 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:21.709238   10441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:21.709241   10441 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:21.709243   10441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:21.709385   10441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:21.710407   10441 out.go:298] Setting JSON to false
	I0731 12:40:21.726533   10441 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5990,"bootTime":1722448831,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:21.726601   10441 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:21.731472   10441 out.go:177] * [old-k8s-version-739000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:21.738323   10441 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:21.738384   10441 notify.go:220] Checking for updates...
	I0731 12:40:21.746439   10441 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:21.749435   10441 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:21.752404   10441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:21.755405   10441 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:21.756853   10441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:21.760754   10441 config.go:182] Loaded profile config "old-k8s-version-739000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:40:21.764395   10441 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:40:21.767397   10441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:21.771440   10441 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:21.778435   10441 start.go:297] selected driver: qemu2
	I0731 12:40:21.778441   10441 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-739000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-739000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:21.778507   10441 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:21.780820   10441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:21.780866   10441 cni.go:84] Creating CNI manager for ""
	I0731 12:40:21.780873   10441 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:40:21.780898   10441 start.go:340] cluster config:
	{Name:old-k8s-version-739000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-739000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:21.784512   10441 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:21.793464   10441 out.go:177] * Starting "old-k8s-version-739000" primary control-plane node in "old-k8s-version-739000" cluster
	I0731 12:40:21.797384   10441 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:40:21.797399   10441 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:40:21.797419   10441 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:21.797480   10441 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:21.797486   10441 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:40:21.797565   10441 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/old-k8s-version-739000/config.json ...
	I0731 12:40:21.798040   10441 start.go:360] acquireMachinesLock for old-k8s-version-739000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:21.798069   10441 start.go:364] duration metric: took 23.166µs to acquireMachinesLock for "old-k8s-version-739000"
	I0731 12:40:21.798077   10441 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:21.798084   10441 fix.go:54] fixHost starting: 
	I0731 12:40:21.798201   10441 fix.go:112] recreateIfNeeded on old-k8s-version-739000: state=Stopped err=<nil>
	W0731 12:40:21.798209   10441 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:21.802435   10441 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-739000" ...
	I0731 12:40:21.810378   10441 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:21.810417   10441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:09:66:b2:d2:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:21.812660   10441 main.go:141] libmachine: STDOUT: 
	I0731 12:40:21.812685   10441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:21.812717   10441 fix.go:56] duration metric: took 14.633042ms for fixHost
	I0731 12:40:21.812723   10441 start.go:83] releasing machines lock for "old-k8s-version-739000", held for 14.649459ms
	W0731 12:40:21.812733   10441 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:21.812786   10441 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:21.812791   10441 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:26.813591   10441 start.go:360] acquireMachinesLock for old-k8s-version-739000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:26.813939   10441 start.go:364] duration metric: took 243.541µs to acquireMachinesLock for "old-k8s-version-739000"
	I0731 12:40:26.814077   10441 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:26.814095   10441 fix.go:54] fixHost starting: 
	I0731 12:40:26.814757   10441 fix.go:112] recreateIfNeeded on old-k8s-version-739000: state=Stopped err=<nil>
	W0731 12:40:26.814785   10441 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:26.825009   10441 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-739000" ...
	I0731 12:40:26.829169   10441 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:26.829363   10441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:09:66:b2:d2:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/old-k8s-version-739000/disk.qcow2
	I0731 12:40:26.838600   10441 main.go:141] libmachine: STDOUT: 
	I0731 12:40:26.838660   10441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:26.838725   10441 fix.go:56] duration metric: took 24.628083ms for fixHost
	I0731 12:40:26.838745   10441 start.go:83] releasing machines lock for "old-k8s-version-739000", held for 24.782333ms
	W0731 12:40:26.838940   10441 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:26.847126   10441 out.go:177] 
	W0731 12:40:26.851186   10441 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:26.851231   10441 out.go:239] * 
	* 
	W0731 12:40:26.854133   10441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:26.862141   10441 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (68.426125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
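
SecondStart takes the fixHost path: it finds the machine left over from the first attempt in state=Stopped and retries the same socket_vmnet_client launch, so it fails identically. The recovery the error text proposes looks like this (both invocations are taken from the log itself, with the start flags abbreviated; note that deleting the profile cannot help while /var/run/socket_vmnet is still refusing connections):

	out/minikube-darwin-arm64 delete -p old-k8s-version-739000
	out/minikube-darwin-arm64 start -p old-k8s-version-739000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0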

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-739000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (31.859958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
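
The harness gives up before polling here because it cannot even build a client config. For reference, a by-hand equivalent of the wait it would have performed, assuming the conventional kubernetes-dashboard pod label (the selector and timeout are illustrative, not taken from this report):

	kubectl --context old-k8s-version-739000 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s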

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-739000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-739000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-739000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.921125ms)

** stderr ** 
	error: context "old-k8s-version-739000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-739000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (30.159ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
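
When the context does exist, a tighter check than describe is to pull just the container images with jsonpath, which is effectively what the "Expected to contain" assertion greps for (plain kubectl; the deployment name and namespace come from the log above):

	kubectl --context old-k8s-version-739000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'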

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-739000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (29.474125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
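
The block above is a go-cmp diff: the "-" lines are the wanted v1.20.0 image set (still hosted on k8s.gcr.io in that release era), and there are no "+" lines because image list returned nothing from the stopped host. On a healthy profile the same data can be inspected directly; a sketch assuming minikube's table output format for image list, which is easier to eyeball than the JSON the test consumes:

	out/minikube-darwin-arm64 -p old-k8s-version-739000 image list --format=table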

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-739000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-739000 --alsologtostderr -v=1: exit status 83 (42.05925ms)

-- stdout --
	* The control-plane node old-k8s-version-739000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-739000"

-- /stdout --
** stderr ** 
	I0731 12:40:27.133251   10460 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:27.133624   10460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:27.133628   10460 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:27.133631   10460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:27.133822   10460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:27.134028   10460 out.go:298] Setting JSON to false
	I0731 12:40:27.134033   10460 mustload.go:65] Loading cluster: old-k8s-version-739000
	I0731 12:40:27.134233   10460 config.go:182] Loaded profile config "old-k8s-version-739000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:40:27.138654   10460 out.go:177] * The control-plane node old-k8s-version-739000 host is not running: state=Stopped
	I0731 12:40:27.142628   10460 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-739000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-739000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (30.035708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (30.225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
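
Exit status 83 here accompanies minikube's advisory output: pause checks the host state first and, on state=Stopped, prints guidance instead of attempting the operation. A guard that mirrors what a caller could do before pausing, built only from the status/pause invocations already shown in this report:

	# only attempt pause when the control-plane host reports Running
	if [ "$(out/minikube-darwin-arm64 status -p old-k8s-version-739000 --format='{{.Host}}')" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p old-k8s-version-739000
	fi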

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.809744s)

-- stdout --
	* [no-preload-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-421000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:27.450628   10477 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:27.450792   10477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:27.450796   10477 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:27.450799   10477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:27.450928   10477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:27.451989   10477 out.go:298] Setting JSON to false
	I0731 12:40:27.467894   10477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5996,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:27.467955   10477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:27.472576   10477 out.go:177] * [no-preload-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:27.480550   10477 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:27.480594   10477 notify.go:220] Checking for updates...
	I0731 12:40:27.487606   10477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:27.490564   10477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:27.493579   10477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:27.496622   10477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:27.498164   10477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:27.501930   10477 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:27.501989   10477 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:27.502039   10477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:27.505548   10477 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:40:27.510541   10477 start.go:297] selected driver: qemu2
	I0731 12:40:27.510546   10477 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:40:27.510553   10477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:27.512810   10477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:40:27.516592   10477 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:40:27.519632   10477 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:27.519665   10477 cni.go:84] Creating CNI manager for ""
	I0731 12:40:27.519671   10477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:27.519679   10477 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:40:27.519714   10477 start.go:340] cluster config:
	{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:27.523572   10477 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.528585   10477 out.go:177] * Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	I0731 12:40:27.536537   10477 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:40:27.536644   10477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/no-preload-421000/config.json ...
	I0731 12:40:27.536665   10477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/no-preload-421000/config.json: {Name:mk3de9507c054b09ce667bc498dcf0ecc7b9612d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:40:27.536657   10477 cache.go:107] acquiring lock: {Name:mk2ef30d61cd7b3b2c45707f04664ba550fd89aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536657   10477 cache.go:107] acquiring lock: {Name:mk90c3bc83976538484f2ff4064016e27c2ee231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536667   10477 cache.go:107] acquiring lock: {Name:mk8a96a2ce038bf1e0ea9da9f9cde95c537c47a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536732   10477 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:40:27.536741   10477 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.875µs
	I0731 12:40:27.536749   10477 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:40:27.536769   10477 cache.go:107] acquiring lock: {Name:mk0dc25951d45b42b818bfd79ead6e265afaf525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536843   10477 cache.go:107] acquiring lock: {Name:mk37d24558c0d3eb70825e51a5d6c5a26033521f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536905   10477 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:40:27.536912   10477 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:40:27.536920   10477 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:40:27.536943   10477 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:27.536959   10477 cache.go:107] acquiring lock: {Name:mk1d419cf9cf73539dce7831cb50a605d9d90c68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536980   10477 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "no-preload-421000"
	I0731 12:40:27.536971   10477 cache.go:107] acquiring lock: {Name:mk3944f9fa2aedebf4f9bf083320195abf7a0039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536961   10477 cache.go:107] acquiring lock: {Name:mkef9ef94b9277b407218a61575603eae4144ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:27.536994   10477 start.go:93] Provisioning new machine with config: &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:27.537060   10477 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:27.537194   10477 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 12:40:27.537261   10477 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:40:27.537531   10477 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:40:27.541410   10477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:27.541996   10477 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:40:27.548138   10477 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:40:27.548226   10477 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:40:27.548765   10477 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:40:27.548790   10477 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:40:27.548766   10477 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:40:27.548854   10477 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 12:40:27.550650   10477 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:40:27.559483   10477 start.go:159] libmachine.API.Create for "no-preload-421000" (driver="qemu2")
	I0731 12:40:27.559500   10477 client.go:168] LocalClient.Create starting
	I0731 12:40:27.559570   10477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:27.559604   10477 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:27.559614   10477 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:27.559664   10477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:27.559688   10477 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:27.559703   10477 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:27.560094   10477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:27.712401   10477 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:27.792116   10477 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:27.792134   10477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:27.792377   10477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:27.802215   10477 main.go:141] libmachine: STDOUT: 
	I0731 12:40:27.802239   10477 main.go:141] libmachine: STDERR: 
	I0731 12:40:27.802299   10477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2 +20000M
	I0731 12:40:27.811316   10477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:27.811333   10477 main.go:141] libmachine: STDERR: 
	I0731 12:40:27.811347   10477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:27.811350   10477 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:27.811369   10477 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:27.811408   10477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:23:ad:fa:8a:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:27.813413   10477 main.go:141] libmachine: STDOUT: 
	I0731 12:40:27.813430   10477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:27.813449   10477 client.go:171] duration metric: took 253.949667ms to LocalClient.Create
	I0731 12:40:27.949314   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0731 12:40:27.961743   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 12:40:27.963194   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 12:40:27.966333   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 12:40:28.003265   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0731 12:40:28.011144   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 12:40:28.050471   10477 cache.go:162] opening:  /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 12:40:28.143640   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:40:28.143695   10477 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 606.913458ms
	I0731 12:40:28.143727   10477 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:40:29.813646   10477 start.go:128] duration metric: took 2.2765845s to createHost
	I0731 12:40:29.813704   10477 start.go:83] releasing machines lock for "no-preload-421000", held for 2.276765208s
	W0731 12:40:29.813762   10477 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:29.830176   10477 out.go:177] * Deleting "no-preload-421000" in qemu2 ...
	W0731 12:40:29.860774   10477 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:29.860808   10477 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:30.326621   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:40:30.326692   10477 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 2.790090125s
	I0731 12:40:30.326717   10477 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:40:31.051766   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:40:31.051819   10477 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.515025958s
	I0731 12:40:31.051847   10477 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:40:31.376056   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:40:31.376097   10477 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.839322166s
	I0731 12:40:31.376119   10477 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:40:32.094813   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:40:32.094871   10477 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.55831575s
	I0731 12:40:32.094899   10477 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:40:32.366526   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:40:32.366582   10477 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.82981275s
	I0731 12:40:32.366608   10477 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:40:34.861230   10477 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:34.861657   10477 start.go:364] duration metric: took 355.333µs to acquireMachinesLock for "no-preload-421000"
	I0731 12:40:34.861763   10477 start.go:93] Provisioning new machine with config: &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:34.861993   10477 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:34.872663   10477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:34.923002   10477 start.go:159] libmachine.API.Create for "no-preload-421000" (driver="qemu2")
	I0731 12:40:34.923043   10477 client.go:168] LocalClient.Create starting
	I0731 12:40:34.923162   10477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:34.923228   10477 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:34.923250   10477 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:34.923323   10477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:34.923367   10477 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:34.923383   10477 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:34.923894   10477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:35.085610   10477 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:35.165343   10477 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:35.165348   10477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:35.165566   10477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:35.175030   10477 main.go:141] libmachine: STDOUT: 
	I0731 12:40:35.175056   10477 main.go:141] libmachine: STDERR: 
	I0731 12:40:35.175111   10477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2 +20000M
	I0731 12:40:35.183103   10477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:35.183119   10477 main.go:141] libmachine: STDERR: 
	I0731 12:40:35.183135   10477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:35.183146   10477 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:35.183160   10477 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:35.183191   10477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:3a:cf:8b:54:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:35.185057   10477 main.go:141] libmachine: STDOUT: 
	I0731 12:40:35.185070   10477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:35.185094   10477 client.go:171] duration metric: took 262.045ms to LocalClient.Create
	I0731 12:40:35.391168   10477 cache.go:157] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:40:35.391205   10477 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.854609333s
	I0731 12:40:35.391218   10477 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:40:35.391257   10477 cache.go:87] Successfully saved all images to host disk.
	I0731 12:40:37.187272   10477 start.go:128] duration metric: took 2.325270666s to createHost
	I0731 12:40:37.187353   10477 start.go:83] releasing machines lock for "no-preload-421000", held for 2.325725208s
	W0731 12:40:37.187917   10477 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:37.200439   10477 out.go:177] 
	W0731 12:40:37.204369   10477 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:37.204391   10477 out.go:239] * 
	* 
	W0731 12:40:37.206848   10477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:37.218406   10477 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (68.126042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
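
Every start attempt in this group fails at the same step: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and every later check in this group sees a stopped host. A minimal Go sketch, assuming only the SocketVMnetPath value shown in the machine config above, that performs the same connectivity probe:

// socketprobe.go: standalone sketch (not part of the test suite) that checks
// whether anything is accepting connections on the socket_vmnet unix socket,
// which is the exact step the qemu2 driver fails at above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this CI host the dial fails with connection refused, matching the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}

A refused dial here means the daemon is not running (or not listening at that path), i.e. a host-environment problem rather than a minikube regression.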

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-421000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-421000 create -f testdata/busybox.yaml: exit status 1 (29.004292ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-421000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (30.768ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (30.39375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
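
This failure is purely downstream of FirstStart: the cluster never came up, so no "no-preload-421000" context was written to the kubeconfig and kubectl exits 1 before reaching any API server. The "(dbg) Run ... Non-zero exit" lines above come from test helpers that run a command and record its exit code instead of aborting; a Go sketch of that pattern (the command and arguments are illustrative, not the helpers' actual implementation):

// runcheck.go: run a command, capture combined output, and report a non-zero
// exit code the way the "(dbg)" helper output above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "no-preload-421000", "get", "pods")
	out, err := cmd.CombinedOutput()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("Non-zero exit: %d\n%s", ee.ExitCode(), out)
			return
		}
		fmt.Println("failed to run command:", err) // e.g. binary not on PATH
		return
	}
	fmt.Printf("%s", out)
}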

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-421000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system: exit status 1 (26.516583ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.910125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.186438375s)

-- stdout --
	* [no-preload-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	* Restarting existing qemu2 VM for "no-preload-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:39.742942   10547 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:39.743070   10547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:39.743073   10547 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:39.743075   10547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:39.743238   10547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:39.744217   10547 out.go:298] Setting JSON to false
	I0731 12:40:39.760555   10547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6008,"bootTime":1722448831,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:39.760622   10547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:39.766045   10547 out.go:177] * [no-preload-421000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:39.773130   10547 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:39.773180   10547 notify.go:220] Checking for updates...
	I0731 12:40:39.780015   10547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:39.783070   10547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:39.785994   10547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:39.789051   10547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:39.792106   10547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:39.795290   10547 config.go:182] Loaded profile config "no-preload-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:40:39.795586   10547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:39.799053   10547 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:39.805975   10547 start.go:297] selected driver: qemu2
	I0731 12:40:39.805981   10547 start.go:901] validating driver "qemu2" against &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:39.806041   10547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:39.808452   10547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:39.808493   10547 cni.go:84] Creating CNI manager for ""
	I0731 12:40:39.808500   10547 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:39.808524   10547 start.go:340] cluster config:
	{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:39.812277   10547 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.819916   10547 out.go:177] * Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	I0731 12:40:39.824059   10547 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:40:39.824153   10547 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/no-preload-421000/config.json ...
	I0731 12:40:39.824193   10547 cache.go:107] acquiring lock: {Name:mk2ef30d61cd7b3b2c45707f04664ba550fd89aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824191   10547 cache.go:107] acquiring lock: {Name:mk8a96a2ce038bf1e0ea9da9f9cde95c537c47a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824227   10547 cache.go:107] acquiring lock: {Name:mk90c3bc83976538484f2ff4064016e27c2ee231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824255   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:40:39.824262   10547 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.084µs
	I0731 12:40:39.824266   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:40:39.824268   10547 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:40:39.824271   10547 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 83.125µs
	I0731 12:40:39.824276   10547 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:40:39.824281   10547 cache.go:107] acquiring lock: {Name:mk0dc25951d45b42b818bfd79ead6e265afaf525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824288   10547 cache.go:107] acquiring lock: {Name:mkef9ef94b9277b407218a61575603eae4144ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824293   10547 cache.go:107] acquiring lock: {Name:mk1d419cf9cf73539dce7831cb50a605d9d90c68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824301   10547 cache.go:107] acquiring lock: {Name:mk3944f9fa2aedebf4f9bf083320195abf7a0039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824306   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:40:39.824314   10547 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 118.458µs
	I0731 12:40:39.824346   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:40:39.824348   10547 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:40:39.824334   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:40:39.824361   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:40:39.824368   10547 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 75.917µs
	I0731 12:40:39.824372   10547 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:40:39.824374   10547 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 91.833µs
	I0731 12:40:39.824409   10547 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:40:39.824355   10547 cache.go:107] acquiring lock: {Name:mk37d24558c0d3eb70825e51a5d6c5a26033521f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:39.824386   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:40:39.824432   10547 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 144.333µs
	I0731 12:40:39.824437   10547 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:40:39.824376   10547 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 53.667µs
	I0731 12:40:39.824443   10547 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:40:39.824446   10547 cache.go:115] /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:40:39.824449   10547 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 127.25µs
	I0731 12:40:39.824453   10547 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:40:39.824458   10547 cache.go:87] Successfully saved all images to host disk.
	I0731 12:40:39.824615   10547 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:39.824652   10547 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "no-preload-421000"
	I0731 12:40:39.824661   10547 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:39.824666   10547 fix.go:54] fixHost starting: 
	I0731 12:40:39.824783   10547 fix.go:112] recreateIfNeeded on no-preload-421000: state=Stopped err=<nil>
	W0731 12:40:39.824792   10547 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:39.831994   10547 out.go:177] * Restarting existing qemu2 VM for "no-preload-421000" ...
	I0731 12:40:39.836062   10547 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:39.836121   10547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:3a:cf:8b:54:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:39.838194   10547 main.go:141] libmachine: STDOUT: 
	I0731 12:40:39.838217   10547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:39.838244   10547 fix.go:56] duration metric: took 13.578084ms for fixHost
	I0731 12:40:39.838254   10547 start.go:83] releasing machines lock for "no-preload-421000", held for 13.591834ms
	W0731 12:40:39.838262   10547 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:39.838289   10547 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:39.838293   10547 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:44.840388   10547 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:44.840803   10547 start.go:364] duration metric: took 336.708µs to acquireMachinesLock for "no-preload-421000"
	I0731 12:40:44.840892   10547 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:44.840917   10547 fix.go:54] fixHost starting: 
	I0731 12:40:44.841623   10547 fix.go:112] recreateIfNeeded on no-preload-421000: state=Stopped err=<nil>
	W0731 12:40:44.841650   10547 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:44.849859   10547 out.go:177] * Restarting existing qemu2 VM for "no-preload-421000" ...
	I0731 12:40:44.854966   10547 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:44.855187   10547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:3a:cf:8b:54:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/no-preload-421000/disk.qcow2
	I0731 12:40:44.864057   10547 main.go:141] libmachine: STDOUT: 
	I0731 12:40:44.864122   10547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:44.864190   10547 fix.go:56] duration metric: took 23.275375ms for fixHost
	I0731 12:40:44.864206   10547 start.go:83] releasing machines lock for "no-preload-421000", held for 23.375542ms
	W0731 12:40:44.864342   10547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:44.871912   10547 out.go:177] 
	W0731 12:40:44.875002   10547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:44.875023   10547 out.go:239] * 
	* 
	W0731 12:40:44.877789   10547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:44.886927   10547 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (68.6765ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
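
One mechanism the log does show working is the --preload=false image cache: during the first start each image was pulled and saved as a tarball (registry.k8s.io/etcd:3.5.14-0 alone took ~7.85s), while on this second start every cache.go:115 "exists" check completes in microseconds because the tarballs are already on disk. A Go sketch that lists what the cache holds; the root path follows this CI job's layout and would normally sit under $MINIKUBE_HOME:

// cachels.go: walk minikube's on-disk image cache and print the saved
// image tarballs (e.g. .../registry.k8s.io/pause_3.10).
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/images/arm64"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() {
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		fmt.Println("walk failed:", err)
	}
}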

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-421000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (32.843375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-421000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.601ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (30.80675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-421000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (30.3005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
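For readers unfamiliar with the "(-want +got)" block above: it is a go-cmp style diff, so every expected image carries a leading "-" because `image list` returned nothing for the stopped profile; a "+" line (none here) would mark an unexpected extra image. The check can be approximated by hand with the same command the test runs:

    # Sketch: list what the profile actually reports. With the VM stopped the list is
    # empty, which is why all eight v1.31.0-beta.0 images appear as missing ("-").
    out/minikube-darwin-arm64 -p no-preload-421000 image list --format=json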

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1: exit status 83 (43.269375ms)

-- stdout --
	* The control-plane node no-preload-421000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-421000"

-- /stdout --
** stderr ** 
	I0731 12:40:45.161546   10569 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:45.161711   10569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:45.161717   10569 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:45.161720   10569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:45.161867   10569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:45.162100   10569 out.go:298] Setting JSON to false
	I0731 12:40:45.162106   10569 mustload.go:65] Loading cluster: no-preload-421000
	I0731 12:40:45.162293   10569 config.go:182] Loaded profile config "no-preload-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:40:45.167065   10569 out.go:177] * The control-plane node no-preload-421000 host is not running: state=Stopped
	I0731 12:40:45.171215   10569 out.go:177]   To start a cluster, run: "minikube start -p no-preload-421000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.8045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (30.510708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.867432625s)

-- stdout --
	* [embed-certs-132000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-132000" primary control-plane node in "embed-certs-132000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-132000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:45.481315   10586 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:45.481445   10586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:45.481449   10586 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:45.481456   10586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:45.481583   10586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:45.482676   10586 out.go:298] Setting JSON to false
	I0731 12:40:45.498689   10586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6014,"bootTime":1722448831,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:45.498761   10586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:45.504047   10586 out.go:177] * [embed-certs-132000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:45.511174   10586 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:45.511229   10586 notify.go:220] Checking for updates...
	I0731 12:40:45.517205   10586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:45.520198   10586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:45.523167   10586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:45.526204   10586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:45.529251   10586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:45.531057   10586 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:45.531131   10586 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:45.531181   10586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:45.535098   10586 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:40:45.542013   10586 start.go:297] selected driver: qemu2
	I0731 12:40:45.542020   10586 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:40:45.542029   10586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:45.544245   10586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:40:45.547203   10586 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:40:45.550252   10586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:45.550310   10586 cni.go:84] Creating CNI manager for ""
	I0731 12:40:45.550319   10586 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:45.550323   10586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:40:45.550353   10586 start.go:340] cluster config:
	{Name:embed-certs-132000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:45.554307   10586 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:45.562171   10586 out.go:177] * Starting "embed-certs-132000" primary control-plane node in "embed-certs-132000" cluster
	I0731 12:40:45.566159   10586 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:40:45.566179   10586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:40:45.566193   10586 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:45.566262   10586 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:45.566275   10586 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:40:45.566337   10586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/embed-certs-132000/config.json ...
	I0731 12:40:45.566352   10586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/embed-certs-132000/config.json: {Name:mkfdd372b229099707ae5782572f08b13b3211a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:40:45.566576   10586 start.go:360] acquireMachinesLock for embed-certs-132000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:45.566619   10586 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "embed-certs-132000"
	I0731 12:40:45.566630   10586 start.go:93] Provisioning new machine with config: &{Name:embed-certs-132000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:45.566688   10586 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:45.575229   10586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:45.592456   10586 start.go:159] libmachine.API.Create for "embed-certs-132000" (driver="qemu2")
	I0731 12:40:45.592488   10586 client.go:168] LocalClient.Create starting
	I0731 12:40:45.592551   10586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:45.592580   10586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:45.592588   10586 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:45.592631   10586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:45.592657   10586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:45.592664   10586 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:45.593046   10586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:45.741532   10586 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:45.779608   10586 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:45.779613   10586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:45.779810   10586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:45.788884   10586 main.go:141] libmachine: STDOUT: 
	I0731 12:40:45.788899   10586 main.go:141] libmachine: STDERR: 
	I0731 12:40:45.788951   10586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2 +20000M
	I0731 12:40:45.796696   10586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:45.796709   10586 main.go:141] libmachine: STDERR: 
	I0731 12:40:45.796720   10586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:45.796726   10586 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:45.796741   10586 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:45.796766   10586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:15:08:06:8f:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:45.798343   10586 main.go:141] libmachine: STDOUT: 
	I0731 12:40:45.798358   10586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:45.798383   10586 client.go:171] duration metric: took 205.894667ms to LocalClient.Create
	I0731 12:40:47.800505   10586 start.go:128] duration metric: took 2.233844875s to createHost
	I0731 12:40:47.800561   10586 start.go:83] releasing machines lock for "embed-certs-132000", held for 2.233982791s
	W0731 12:40:47.800623   10586 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:47.810533   10586 out.go:177] * Deleting "embed-certs-132000" in qemu2 ...
	W0731 12:40:47.840626   10586 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:47.840652   10586 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:52.842704   10586 start.go:360] acquireMachinesLock for embed-certs-132000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:52.843208   10586 start.go:364] duration metric: took 424.292µs to acquireMachinesLock for "embed-certs-132000"
	I0731 12:40:52.843321   10586 start.go:93] Provisioning new machine with config: &{Name:embed-certs-132000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:52.843636   10586 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:52.853047   10586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:52.904878   10586 start.go:159] libmachine.API.Create for "embed-certs-132000" (driver="qemu2")
	I0731 12:40:52.904931   10586 client.go:168] LocalClient.Create starting
	I0731 12:40:52.905033   10586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:40:52.905097   10586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:52.905114   10586 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:52.905178   10586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:40:52.905222   10586 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:52.905234   10586 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:52.905822   10586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:40:53.067664   10586 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:53.247366   10586 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:53.247373   10586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:53.247601   10586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:53.257243   10586 main.go:141] libmachine: STDOUT: 
	I0731 12:40:53.257261   10586 main.go:141] libmachine: STDERR: 
	I0731 12:40:53.257304   10586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2 +20000M
	I0731 12:40:53.265068   10586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:53.265082   10586 main.go:141] libmachine: STDERR: 
	I0731 12:40:53.265090   10586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:53.265094   10586 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:53.265103   10586 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:53.265144   10586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:27:e2:dd:cf:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:53.266755   10586 main.go:141] libmachine: STDOUT: 
	I0731 12:40:53.266769   10586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:53.266791   10586 client.go:171] duration metric: took 361.852167ms to LocalClient.Create
	I0731 12:40:55.268969   10586 start.go:128] duration metric: took 2.425355792s to createHost
	I0731 12:40:55.269062   10586 start.go:83] releasing machines lock for "embed-certs-132000", held for 2.425859458s
	W0731 12:40:55.269361   10586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:55.284957   10586 out.go:177] 
	W0731 12:40:55.292657   10586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:55.292683   10586 out.go:239] * 
	* 
	W0731 12:40:55.295071   10586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:55.307093   10586 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (65.139ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.94s)
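Every FirstStart failure in this run bottoms out in the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor, and the driver gives up after one delete-and-retry cycle. A pre-flight check for the CI host could look like the sketch below (paths are taken from the log above; the daemon launch command is an assumption based on the lima-vm/socket_vmnet README, not on anything in this report):

    # Sketch: confirm the socket_vmnet daemon is up before running qemu2-driver tests.
    if [ -S /var/run/socket_vmnet ]; then
      echo "socket_vmnet socket present"
    else
      echo "socket missing; start the daemon first, e.g. (assumed invocation):"
      echo "  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet"
    fi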

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-132000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-132000 create -f testdata/busybox.yaml: exit status 1 (29.469ms)

** stderr ** 
	error: context "embed-certs-132000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-132000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (30.568333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (29.311333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-132000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-132000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-132000 describe deploy/metrics-server -n kube-system: exit status 1 (28.220459ms)

** stderr ** 
	error: context "embed-certs-132000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-132000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (30.948417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
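Note the asymmetry in this test: the `addons enable metrics-server` command itself exits 0 (no "Non-zero exit" line is logged for it), since enabling an addon persists the setting to the profile even though the VM never came up; the recorded values surface later in the SecondStart config dump (Addons:map[dashboard:true metrics-server:true] plus the CustomAddonImages/CustomAddonRegistries overrides). Only the follow-up kubectl probe fails, because no kubeconfig context was ever created. One way to see what was written (a sketch; the config path comes from the profile.go lines in this log):

    # Sketch: inspect the addon settings persisted for the stopped profile.
    grep -i addon /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/embed-certs-132000/config.json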

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.173033291s)

-- stdout --
	* [embed-certs-132000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-132000" primary control-plane node in "embed-certs-132000" cluster
	* Restarting existing qemu2 VM for "embed-certs-132000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-132000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:59.390065   10637 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:59.390203   10637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:59.390207   10637 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:59.390209   10637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:59.390345   10637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:40:59.391345   10637 out.go:298] Setting JSON to false
	I0731 12:40:59.407557   10637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6028,"bootTime":1722448831,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:40:59.407620   10637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:59.411418   10637 out.go:177] * [embed-certs-132000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:59.418307   10637 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:40:59.418336   10637 notify.go:220] Checking for updates...
	I0731 12:40:59.425238   10637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:40:59.429253   10637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:59.432321   10637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:59.435285   10637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:40:59.438212   10637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:59.441613   10637 config.go:182] Loaded profile config "embed-certs-132000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:59.441879   10637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:59.445281   10637 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:59.452254   10637 start.go:297] selected driver: qemu2
	I0731 12:40:59.452260   10637 start.go:901] validating driver "qemu2" against &{Name:embed-certs-132000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:59.452305   10637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:59.454507   10637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:59.454529   10637 cni.go:84] Creating CNI manager for ""
	I0731 12:40:59.454539   10637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:59.454562   10637 start.go:340] cluster config:
	{Name:embed-certs-132000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:59.457975   10637 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:59.466242   10637 out.go:177] * Starting "embed-certs-132000" primary control-plane node in "embed-certs-132000" cluster
	I0731 12:40:59.469223   10637 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:40:59.469242   10637 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:40:59.469255   10637 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:59.469313   10637 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:59.469321   10637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:40:59.469388   10637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/embed-certs-132000/config.json ...
	I0731 12:40:59.469909   10637 start.go:360] acquireMachinesLock for embed-certs-132000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:59.469937   10637 start.go:364] duration metric: took 22.125µs to acquireMachinesLock for "embed-certs-132000"
	I0731 12:40:59.469945   10637 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:59.469952   10637 fix.go:54] fixHost starting: 
	I0731 12:40:59.470072   10637 fix.go:112] recreateIfNeeded on embed-certs-132000: state=Stopped err=<nil>
	W0731 12:40:59.470080   10637 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:59.474293   10637 out.go:177] * Restarting existing qemu2 VM for "embed-certs-132000" ...
	I0731 12:40:59.481261   10637 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:59.481304   10637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:27:e2:dd:cf:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:40:59.483382   10637 main.go:141] libmachine: STDOUT: 
	I0731 12:40:59.483403   10637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:59.483440   10637 fix.go:56] duration metric: took 13.490541ms for fixHost
	I0731 12:40:59.483445   10637 start.go:83] releasing machines lock for "embed-certs-132000", held for 13.504167ms
	W0731 12:40:59.483452   10637 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:59.483482   10637 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:59.483487   10637 start.go:729] Will try again in 5 seconds ...
	I0731 12:41:04.485542   10637 start.go:360] acquireMachinesLock for embed-certs-132000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:04.485882   10637 start.go:364] duration metric: took 258.125µs to acquireMachinesLock for "embed-certs-132000"
	I0731 12:41:04.485949   10637 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:41:04.485959   10637 fix.go:54] fixHost starting: 
	I0731 12:41:04.486361   10637 fix.go:112] recreateIfNeeded on embed-certs-132000: state=Stopped err=<nil>
	W0731 12:41:04.486376   10637 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:41:04.490878   10637 out.go:177] * Restarting existing qemu2 VM for "embed-certs-132000" ...
	I0731 12:41:04.494769   10637 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:04.494941   10637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:27:e2:dd:cf:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/disk.qcow2
	I0731 12:41:04.500385   10637 main.go:141] libmachine: STDOUT: 
	I0731 12:41:04.500427   10637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:04.500478   10637 fix.go:56] duration metric: took 14.518333ms for fixHost
	I0731 12:41:04.500488   10637 start.go:83] releasing machines lock for "embed-certs-132000", held for 14.594417ms
	W0731 12:41:04.500631   10637 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-132000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-132000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:04.509828   10637 out.go:177] 
	W0731 12:41:04.511215   10637 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:04.511233   10637 out.go:239] * 
	* 
	W0731 12:41:04.512570   10637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:41:04.522813   10637 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-132000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (65.816375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
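Unlike FirstStart, this SecondStart run takes the existing-machine path ("Skipping create...Using existing machine configuration", then recreateIfNeeded followed by "Restarting existing qemu2 VM"), which is why it fails in roughly half the time: there is no ISO copy or disk-image creation, only the same doomed socket_vmnet_client invocation. A quick way to see that the first attempt left a machine directory behind (path taken from the log above):

    # Sketch: the fix path is chosen because FirstStart already created this machine dir.
    ls /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/embed-certs-132000/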

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-132000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (33.620167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
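The `context "embed-certs-132000" does not exist` error follows directly from the failed start: the profile's kubeconfig entry was never written. A minimal sketch using k8s.io/client-go (an assumed dependency for illustration; this is not the test's own code) that reproduces the error:

	// context_check.go — a sketch, assuming k8s.io/client-go is available:
	// build a client config pinned to a context name that the failed start
	// never wrote into the kubeconfig.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-132000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// With no such context in the kubeconfig, this reports an error
			// of the form seen in the log above.
			fmt.Println(err)
		}
	}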

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-132000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-132000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-132000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.961625ms)

** stderr ** 
	error: context "embed-certs-132000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-132000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (28.930375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-132000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (28.849625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
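Because the host never started, `image list` returned nothing, so the want/got diff above reports every expected v1.30.3 image as missing. A minimal sketch of that comparison (illustrative only; the real assertion lives in start_stop_delete_test.go):

	// image_diff.go — a sketch of the want/got comparison behind the diff
	// above; with a stopped host the "got" set is empty, so every wanted
	// image appears on the "-" side.
	package main

	import "fmt"

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		// `minikube image list` produced no output on the stopped host.
		got := map[string]bool{}
		for _, img := range want {
			if !got[img] {
				fmt.Println("missing:", img) // every expected image is reported
			}
		}
	}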

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-132000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-132000 --alsologtostderr -v=1: exit status 83 (41.881666ms)

-- stdout --
	* The control-plane node embed-certs-132000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-132000"

-- /stdout --
** stderr ** 
	I0731 12:41:04.787775   10666 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:04.787927   10666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:04.787930   10666 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:04.787932   10666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:04.788102   10666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:04.788319   10666 out.go:298] Setting JSON to false
	I0731 12:41:04.788325   10666 mustload.go:65] Loading cluster: embed-certs-132000
	I0731 12:41:04.788542   10666 config.go:182] Loaded profile config "embed-certs-132000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:04.793321   10666 out.go:177] * The control-plane node embed-certs-132000 host is not running: state=Stopped
	I0731 12:41:04.797140   10666 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-132000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-132000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (28.300625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (27.165875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.901436417s)

-- stdout --
	* [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-321000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:41:05.205814   10690 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:05.205950   10690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:05.205953   10690 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:05.205955   10690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:05.206082   10690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:05.207146   10690 out.go:298] Setting JSON to false
	I0731 12:41:05.223351   10690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6034,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:41:05.223415   10690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:41:05.228229   10690 out.go:177] * [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:41:05.235391   10690 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:41:05.235417   10690 notify.go:220] Checking for updates...
	I0731 12:41:05.242229   10690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:41:05.246320   10690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:41:05.249277   10690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:41:05.252259   10690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:41:05.255276   10690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:41:05.258676   10690 config.go:182] Loaded profile config "cert-expiration-505000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:05.258746   10690 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:05.258796   10690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:41:05.262216   10690 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:41:05.269200   10690 start.go:297] selected driver: qemu2
	I0731 12:41:05.269207   10690 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:41:05.269215   10690 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:41:05.271483   10690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:41:05.274257   10690 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:41:05.278366   10690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:41:05.278410   10690 cni.go:84] Creating CNI manager for ""
	I0731 12:41:05.278423   10690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:41:05.278427   10690 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:41:05.278457   10690 start.go:340] cluster config:
	{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:05.282408   10690 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:41:05.291275   10690 out.go:177] * Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	I0731 12:41:05.295257   10690 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:41:05.295279   10690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:41:05.295297   10690 cache.go:56] Caching tarball of preloaded images
	I0731 12:41:05.295360   10690 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:41:05.295373   10690 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:41:05.295430   10690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/default-k8s-diff-port-321000/config.json ...
	I0731 12:41:05.295446   10690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/default-k8s-diff-port-321000/config.json: {Name:mkd3821dc8ed597bdabe29f58f3820d2c17a6769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:41:05.295666   10690 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:05.295701   10690 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0731 12:41:05.295713   10690 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:41:05.295744   10690 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:41:05.304236   10690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:41:05.322607   10690 start.go:159] libmachine.API.Create for "default-k8s-diff-port-321000" (driver="qemu2")
	I0731 12:41:05.322641   10690 client.go:168] LocalClient.Create starting
	I0731 12:41:05.322715   10690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:41:05.322747   10690 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:05.322755   10690 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:05.322792   10690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:41:05.322817   10690 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:05.322828   10690 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:05.323175   10690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:41:05.472861   10690 main.go:141] libmachine: Creating SSH key...
	I0731 12:41:05.534195   10690 main.go:141] libmachine: Creating Disk image...
	I0731 12:41:05.534200   10690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:41:05.534401   10690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:05.543593   10690 main.go:141] libmachine: STDOUT: 
	I0731 12:41:05.543608   10690 main.go:141] libmachine: STDERR: 
	I0731 12:41:05.543650   10690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2 +20000M
	I0731 12:41:05.551516   10690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:41:05.551533   10690 main.go:141] libmachine: STDERR: 
	I0731 12:41:05.551546   10690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:05.551553   10690 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:41:05.551563   10690 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:05.551591   10690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:10:11:11:3f:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:05.553260   10690 main.go:141] libmachine: STDOUT: 
	I0731 12:41:05.553275   10690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:05.553303   10690 client.go:171] duration metric: took 230.6615ms to LocalClient.Create
	I0731 12:41:07.555480   10690 start.go:128] duration metric: took 2.259748792s to createHost
	I0731 12:41:07.555585   10690 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 2.259883875s
	W0731 12:41:07.555659   10690 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:07.566690   10690 out.go:177] * Deleting "default-k8s-diff-port-321000" in qemu2 ...
	W0731 12:41:07.596921   10690 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:07.596948   10690 start.go:729] Will try again in 5 seconds ...
	I0731 12:41:12.599076   10690 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:12.599521   10690 start.go:364] duration metric: took 365.583µs to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0731 12:41:12.599649   10690 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:41:12.599924   10690 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:41:12.609437   10690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:41:12.659569   10690 start.go:159] libmachine.API.Create for "default-k8s-diff-port-321000" (driver="qemu2")
	I0731 12:41:12.659619   10690 client.go:168] LocalClient.Create starting
	I0731 12:41:12.659736   10690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:41:12.659802   10690 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:12.659819   10690 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:12.659893   10690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:41:12.659938   10690 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:12.659952   10690 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:12.661180   10690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:41:12.828190   10690 main.go:141] libmachine: Creating SSH key...
	I0731 12:41:13.013293   10690 main.go:141] libmachine: Creating Disk image...
	I0731 12:41:13.013299   10690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:41:13.013555   10690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:13.023197   10690 main.go:141] libmachine: STDOUT: 
	I0731 12:41:13.023216   10690 main.go:141] libmachine: STDERR: 
	I0731 12:41:13.023270   10690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2 +20000M
	I0731 12:41:13.031196   10690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:41:13.031212   10690 main.go:141] libmachine: STDERR: 
	I0731 12:41:13.031226   10690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:13.031231   10690 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:41:13.031241   10690 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:13.031278   10690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c5:b5:6a:cc:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:13.032995   10690 main.go:141] libmachine: STDOUT: 
	I0731 12:41:13.033011   10690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:13.033024   10690 client.go:171] duration metric: took 373.406708ms to LocalClient.Create
	I0731 12:41:15.035156   10690 start.go:128] duration metric: took 2.435250625s to createHost
	I0731 12:41:15.035214   10690 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 2.435719292s
	W0731 12:41:15.035660   10690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:15.044163   10690 out.go:177] 
	W0731 12:41:15.050290   10690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:15.050336   10690 out.go:239] * 
	W0731 12:41:15.052999   10690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:41:15.065260   10690 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (64.695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)
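The recurring `exit status 7 (may be ok)` from `minikube status` is a bitmask; assuming the flag values used in minikube's cmd/status.go (host = 1<<0, kubelet = 1<<1, apiserver = 1<<2), 7 means all three components report not running, which is consistent with the Stopped state in the post-mortems. A small decoding sketch under that assumption:

	// status_code.go — decode `minikube status`'s exit code as a bitmask.
	// The flag values are an assumption based on minikube's cmd/status.go,
	// not something this report states directly.
	package main

	import "fmt"

	func main() {
		const (
			hostNotRunning      = 1 << 0
			kubeletNotRunning   = 1 << 1
			apiserverNotRunning = 1 << 2
		)
		code := 7 // as seen in the post-mortems in this report
		fmt.Println("host not running:", code&hostNotRunning != 0)
		fmt.Println("kubelet not running:", code&kubeletNotRunning != 0)
		fmt.Println("apiserver not running:", code&apiserverNotRunning != 0)
	}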

TestStartStop/group/newest-cni/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.783826917s)

-- stdout --
	* [newest-cni-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-949000" primary control-plane node in "newest-cni-949000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-949000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:41:09.045062   10706 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:09.045196   10706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:09.045199   10706 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:09.045202   10706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:09.045332   10706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:09.046345   10706 out.go:298] Setting JSON to false
	I0731 12:41:09.062619   10706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6038,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:41:09.062680   10706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:41:09.068245   10706 out.go:177] * [newest-cni-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:41:09.074926   10706 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:41:09.075020   10706 notify.go:220] Checking for updates...
	I0731 12:41:09.083947   10706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:41:09.087950   10706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:41:09.091989   10706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:41:09.093365   10706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:41:09.095933   10706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:41:09.099264   10706 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:09.099338   10706 config.go:182] Loaded profile config "multinode-810000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:09.099393   10706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:41:09.100896   10706 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:41:09.107956   10706 start.go:297] selected driver: qemu2
	I0731 12:41:09.107961   10706 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:41:09.107968   10706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:41:09.110226   10706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 12:41:09.110255   10706 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 12:41:09.114975   10706 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:41:09.118044   10706 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:41:09.118057   10706 cni.go:84] Creating CNI manager for ""
	I0731 12:41:09.118066   10706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:41:09.118070   10706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:41:09.118098   10706 start.go:340] cluster config:
	{Name:newest-cni-949000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:09.121796   10706 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:41:09.129949   10706 out.go:177] * Starting "newest-cni-949000" primary control-plane node in "newest-cni-949000" cluster
	I0731 12:41:09.134029   10706 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:41:09.134048   10706 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:41:09.134060   10706 cache.go:56] Caching tarball of preloaded images
	I0731 12:41:09.134129   10706 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:41:09.134135   10706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:41:09.134204   10706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/newest-cni-949000/config.json ...
	I0731 12:41:09.134221   10706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/newest-cni-949000/config.json: {Name:mkdd0f94bdb84e8c63d0faebbfb25b61cdec4bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:41:09.134446   10706 start.go:360] acquireMachinesLock for newest-cni-949000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:09.134481   10706 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "newest-cni-949000"
	I0731 12:41:09.134492   10706 start.go:93] Provisioning new machine with config: &{Name:newest-cni-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:41:09.134526   10706 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:41:09.141972   10706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:41:09.160302   10706 start.go:159] libmachine.API.Create for "newest-cni-949000" (driver="qemu2")
	I0731 12:41:09.160332   10706 client.go:168] LocalClient.Create starting
	I0731 12:41:09.160397   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:41:09.160430   10706 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:09.160440   10706 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:09.160484   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:41:09.160510   10706 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:09.160517   10706 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:09.160883   10706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:41:09.309416   10706 main.go:141] libmachine: Creating SSH key...
	I0731 12:41:09.378799   10706 main.go:141] libmachine: Creating Disk image...
	I0731 12:41:09.378804   10706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:41:09.379022   10706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:09.388073   10706 main.go:141] libmachine: STDOUT: 
	I0731 12:41:09.388091   10706 main.go:141] libmachine: STDERR: 
	I0731 12:41:09.388146   10706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2 +20000M
	I0731 12:41:09.395896   10706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:41:09.395909   10706 main.go:141] libmachine: STDERR: 
	I0731 12:41:09.395920   10706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:09.395924   10706 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:41:09.395940   10706 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:09.395974   10706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:d0:0f:68:37:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:09.397556   10706 main.go:141] libmachine: STDOUT: 
	I0731 12:41:09.397572   10706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:09.397590   10706 client.go:171] duration metric: took 237.257375ms to LocalClient.Create
	I0731 12:41:11.399792   10706 start.go:128] duration metric: took 2.2652995s to createHost
	I0731 12:41:11.399834   10706 start.go:83] releasing machines lock for "newest-cni-949000", held for 2.265394292s
	W0731 12:41:11.399893   10706 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:11.407046   10706 out.go:177] * Deleting "newest-cni-949000" in qemu2 ...
	W0731 12:41:11.433757   10706 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:11.433786   10706 start.go:729] Will try again in 5 seconds ...
	I0731 12:41:16.435842   10706 start.go:360] acquireMachinesLock for newest-cni-949000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:16.436301   10706 start.go:364] duration metric: took 367.625µs to acquireMachinesLock for "newest-cni-949000"
	I0731 12:41:16.436518   10706 start.go:93] Provisioning new machine with config: &{Name:newest-cni-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:41:16.436864   10706 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:41:16.446543   10706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:41:16.497296   10706 start.go:159] libmachine.API.Create for "newest-cni-949000" (driver="qemu2")
	I0731 12:41:16.497347   10706 client.go:168] LocalClient.Create starting
	I0731 12:41:16.497461   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/ca.pem
	I0731 12:41:16.497506   10706 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:16.497536   10706 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:16.497598   10706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-6578/.minikube/certs/cert.pem
	I0731 12:41:16.497627   10706 main.go:141] libmachine: Decoding PEM data...
	I0731 12:41:16.497638   10706 main.go:141] libmachine: Parsing certificate...
	I0731 12:41:16.498166   10706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:41:16.658573   10706 main.go:141] libmachine: Creating SSH key...
	I0731 12:41:16.735138   10706 main.go:141] libmachine: Creating Disk image...
	I0731 12:41:16.735149   10706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:41:16.735447   10706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2.raw /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:16.744362   10706 main.go:141] libmachine: STDOUT: 
	I0731 12:41:16.744476   10706 main.go:141] libmachine: STDERR: 
	I0731 12:41:16.744525   10706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2 +20000M
	I0731 12:41:16.752393   10706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:41:16.752408   10706 main.go:141] libmachine: STDERR: 
	I0731 12:41:16.752427   10706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:16.752432   10706 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:41:16.752441   10706 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:16.752473   10706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:92:eb:20:f6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:16.753968   10706 main.go:141] libmachine: STDOUT: 
	I0731 12:41:16.753980   10706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:16.753992   10706 client.go:171] duration metric: took 256.646417ms to LocalClient.Create
	I0731 12:41:18.756136   10706 start.go:128] duration metric: took 2.319292458s to createHost
	I0731 12:41:18.756185   10706 start.go:83] releasing machines lock for "newest-cni-949000", held for 2.319871959s
	W0731 12:41:18.756531   10706 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-949000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-949000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:18.770161   10706 out.go:177] 
	W0731 12:41:18.777161   10706 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:18.777195   10706 out.go:239] * 
	* 
	W0731 12:41:18.779629   10706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:41:18.788112   10706 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (65.147292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.85s)
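
Every failure in this serial group reduces to the one error in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand a network fd to qemu-system-aarch64 and the driver aborts. A minimal triage sketch for the CI host, assuming the stock Homebrew socket_vmnet service (only the paths are taken from the log):

	# Hedged triage sketch; assumes socket_vmnet was installed via Homebrew.
	ls -l /var/run/socket_vmnet                # the socket the driver dials
	sudo brew services restart socket_vmnet    # restart the daemon that should own it
	# Smoke-test the client/daemon pair before re-running the suite:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true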

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml: exit status 1 (30.547666ms)

** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-321000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (29.302625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.048166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
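
Because FirstStart never provisioned the VM, no kubeconfig context named default-k8s-diff-port-321000 was ever written; this step, and every kubectl-based step after it in the serial group, therefore fails before touching a cluster. The quickest confirmation by hand (standard kubectl; the context name comes from the log):

	kubectl config get-contexts    # default-k8s-diff-port-321000 will be absent after the failed start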

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-321000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system: exit status 1 (27.005583ms)

** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-321000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.85225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
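
For reference, the assertion at start_stop_delete_test.go:221 amounts to checking the deployment's container image. A hedged one-off equivalent (context, namespace, and expected image come from the log; the jsonpath expression is an assumption about the deployment's shape):

	kubectl --context default-k8s-diff-port-321000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4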

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.306768791s)

-- stdout --
	* [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:41:17.576226   10752 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:17.576359   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:17.576362   10752 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:17.576365   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:17.576499   10752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:17.577515   10752 out.go:298] Setting JSON to false
	I0731 12:41:17.593825   10752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6046,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:41:17.593902   10752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:41:17.598186   10752 out.go:177] * [default-k8s-diff-port-321000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:41:17.605156   10752 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:41:17.605209   10752 notify.go:220] Checking for updates...
	I0731 12:41:17.612018   10752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:41:17.616108   10752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:41:17.619132   10752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:41:17.622030   10752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:41:17.625043   10752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:41:17.628369   10752 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:17.628632   10752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:41:17.632029   10752 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:41:17.639088   10752 start.go:297] selected driver: qemu2
	I0731 12:41:17.639096   10752 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:17.639165   10752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:41:17.641667   10752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:41:17.641740   10752 cni.go:84] Creating CNI manager for ""
	I0731 12:41:17.641748   10752 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:41:17.641771   10752 start.go:340] cluster config:
	{Name:default-k8s-diff-port-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:17.645513   10752 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:41:17.654070   10752 out.go:177] * Starting "default-k8s-diff-port-321000" primary control-plane node in "default-k8s-diff-port-321000" cluster
	I0731 12:41:17.657010   10752 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:41:17.657028   10752 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:41:17.657037   10752 cache.go:56] Caching tarball of preloaded images
	I0731 12:41:17.657087   10752 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:41:17.657092   10752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:41:17.657146   10752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/default-k8s-diff-port-321000/config.json ...
	I0731 12:41:17.657663   10752 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:18.756330   10752 start.go:364] duration metric: took 1.098621959s to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0731 12:41:18.756476   10752 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:41:18.756524   10752 fix.go:54] fixHost starting: 
	I0731 12:41:18.757222   10752 fix.go:112] recreateIfNeeded on default-k8s-diff-port-321000: state=Stopped err=<nil>
	W0731 12:41:18.757270   10752 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:41:18.773982   10752 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	I0731 12:41:18.781172   10752 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:18.781350   10752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c5:b5:6a:cc:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:18.792069   10752 main.go:141] libmachine: STDOUT: 
	I0731 12:41:18.792170   10752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:18.792318   10752 fix.go:56] duration metric: took 35.806958ms for fixHost
	I0731 12:41:18.792337   10752 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 35.967083ms
	W0731 12:41:18.792376   10752 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:18.792523   10752 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:18.792543   10752 start.go:729] Will try again in 5 seconds ...
	I0731 12:41:23.794654   10752 start.go:360] acquireMachinesLock for default-k8s-diff-port-321000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:23.795144   10752 start.go:364] duration metric: took 353.709µs to acquireMachinesLock for "default-k8s-diff-port-321000"
	I0731 12:41:23.795258   10752 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:41:23.795279   10752 fix.go:54] fixHost starting: 
	I0731 12:41:23.796017   10752 fix.go:112] recreateIfNeeded on default-k8s-diff-port-321000: state=Stopped err=<nil>
	W0731 12:41:23.796045   10752 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:41:23.801405   10752 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-321000" ...
	I0731 12:41:23.809521   10752 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:23.809814   10752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c5:b5:6a:cc:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/default-k8s-diff-port-321000/disk.qcow2
	I0731 12:41:23.818645   10752 main.go:141] libmachine: STDOUT: 
	I0731 12:41:23.818699   10752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:23.818803   10752 fix.go:56] duration metric: took 23.522041ms for fixHost
	I0731 12:41:23.818819   10752 start.go:83] releasing machines lock for "default-k8s-diff-port-321000", held for 23.653875ms
	W0731 12:41:23.818986   10752 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-321000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:23.826537   10752 out.go:177] 
	W0731 12:41:23.830570   10752 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:23.830599   10752 out.go:239] * 
	* 
	W0731 12:41:23.833278   10752 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:41:23.841543   10752 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (66.652792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.38s)
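
The stderr above shows minikube's full retry path: fixHost fails on the dead socket, the start logic waits 5 seconds, retries the identical qemu invocation once, then exits with GUEST_PROVISION (exit status 80). The recovery the log itself proposes, sketched as a round trip (profile name and flags copied from the failing command; it still requires a working /var/run/socket_vmnet, per the triage sketch earlier):

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-321000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000 --memory=2200 \
	  --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.30.3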

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.183130125s)

-- stdout --
	* [newest-cni-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-949000" primary control-plane node in "newest-cni-949000" cluster
	* Restarting existing qemu2 VM for "newest-cni-949000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-949000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:41:22.675571   10785 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:22.675691   10785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:22.675694   10785 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:22.675697   10785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:22.675830   10785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:22.676807   10785 out.go:298] Setting JSON to false
	I0731 12:41:22.693133   10785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6051,"bootTime":1722448831,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:41:22.693221   10785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:41:22.698216   10785 out.go:177] * [newest-cni-949000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:41:22.705171   10785 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:41:22.705210   10785 notify.go:220] Checking for updates...
	I0731 12:41:22.712240   10785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:41:22.715237   10785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:41:22.718196   10785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:41:22.721240   10785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:41:22.724207   10785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:41:22.727420   10785 config.go:182] Loaded profile config "newest-cni-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:41:22.727683   10785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:41:22.731231   10785 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:41:22.738166   10785 start.go:297] selected driver: qemu2
	I0731 12:41:22.738171   10785 start.go:901] validating driver "qemu2" against &{Name:newest-cni-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:22.738215   10785 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:41:22.740563   10785 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:41:22.740597   10785 cni.go:84] Creating CNI manager for ""
	I0731 12:41:22.740603   10785 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:41:22.740626   10785 start.go:340] cluster config:
	{Name:newest-cni-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:41:22.744174   10785 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:41:22.752296   10785 out.go:177] * Starting "newest-cni-949000" primary control-plane node in "newest-cni-949000" cluster
	I0731 12:41:22.757276   10785 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:41:22.757293   10785 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:41:22.757306   10785 cache.go:56] Caching tarball of preloaded images
	I0731 12:41:22.757374   10785 preload.go:172] Found /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:41:22.757382   10785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:41:22.757446   10785 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/newest-cni-949000/config.json ...
	I0731 12:41:22.757927   10785 start.go:360] acquireMachinesLock for newest-cni-949000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:22.757957   10785 start.go:364] duration metric: took 23.541µs to acquireMachinesLock for "newest-cni-949000"
	I0731 12:41:22.757969   10785 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:41:22.757975   10785 fix.go:54] fixHost starting: 
	I0731 12:41:22.758095   10785 fix.go:112] recreateIfNeeded on newest-cni-949000: state=Stopped err=<nil>
	W0731 12:41:22.758103   10785 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:41:22.762158   10785 out.go:177] * Restarting existing qemu2 VM for "newest-cni-949000" ...
	I0731 12:41:22.769134   10785 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:22.769174   10785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:92:eb:20:f6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:22.771312   10785 main.go:141] libmachine: STDOUT: 
	I0731 12:41:22.771331   10785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:22.771361   10785 fix.go:56] duration metric: took 13.386708ms for fixHost
	I0731 12:41:22.771365   10785 start.go:83] releasing machines lock for "newest-cni-949000", held for 13.403958ms
	W0731 12:41:22.771373   10785 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:22.771410   10785 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:22.771415   10785 start.go:729] Will try again in 5 seconds ...
	I0731 12:41:27.773591   10785 start.go:360] acquireMachinesLock for newest-cni-949000: {Name:mkeefb7d79700223b08e90d7849b1498b0672b05 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:41:27.774010   10785 start.go:364] duration metric: took 315.084µs to acquireMachinesLock for "newest-cni-949000"
	I0731 12:41:27.774158   10785 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:41:27.774178   10785 fix.go:54] fixHost starting: 
	I0731 12:41:27.774973   10785 fix.go:112] recreateIfNeeded on newest-cni-949000: state=Stopped err=<nil>
	W0731 12:41:27.775005   10785 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:41:27.780440   10785 out.go:177] * Restarting existing qemu2 VM for "newest-cni-949000" ...
	I0731 12:41:27.785368   10785 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:41:27.785635   10785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:92:eb:20:f6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19360-6578/.minikube/machines/newest-cni-949000/disk.qcow2
	I0731 12:41:27.795305   10785 main.go:141] libmachine: STDOUT: 
	I0731 12:41:27.795359   10785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:41:27.795442   10785 fix.go:56] duration metric: took 21.267875ms for fixHost
	I0731 12:41:27.795460   10785 start.go:83] releasing machines lock for "newest-cni-949000", held for 21.427708ms
	W0731 12:41:27.795659   10785 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-949000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-949000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:41:27.803420   10785 out.go:177] 
	W0731 12:41:27.807452   10785 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:41:27.807474   10785 out.go:239] * 
	* 
	W0731 12:41:27.810019   10785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:41:27.818407   10785 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-949000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (68.064625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-321000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (31.666375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-321000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.501459ms)

** stderr ** 
	error: context "default-k8s-diff-port-321000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-321000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.877209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-321000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
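
The block above is a go-cmp style diff (-want +got): every expected v1.30.3 image sits on the want side and the got side is empty, i.e. `image list` returned nothing because the VM never booted. A hedged way to eyeball the same data by hand (the repoTags field name in the JSON output is an assumption):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-321000 image list --format=json | jq -r '.[].repoTags[]?'
	# prints nothing here; a healthy profile would list the images shown on the -want side
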
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.333084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
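
Note: the -want/+got diff above lists every expected v1.30.3 image under -want with nothing under +got, consistent with "image list" querying a stopped VM that has no images loaded. A sketch of the same check against a running profile, using the command from the test invocation above:

	out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000
	out/minikube-darwin-arm64 -p default-k8s-diff-port-321000 image list --format=json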

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1: exit status 83 (40.118084ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-321000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-321000"

-- /stdout --
** stderr ** 
	I0731 12:41:24.106270   10804 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:24.106447   10804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:24.106450   10804 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:24.106453   10804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:24.106593   10804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:24.106820   10804 out.go:298] Setting JSON to false
	I0731 12:41:24.106825   10804 mustload.go:65] Loading cluster: default-k8s-diff-port-321000
	I0731 12:41:24.107027   10804 config.go:182] Loaded profile config "default-k8s-diff-port-321000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:41:24.111929   10804 out.go:177] * The control-plane node default-k8s-diff-port-321000 host is not running: state=Stopped
	I0731 12:41:24.114832   10804 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-321000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (29.106917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (28.604209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-321000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
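
Note: pause refuses to act on a stopped host: the stderr trace shows mustload.go loading the profile config, seeing state=Stopped, and exiting with status 83 without touching the guest. A sketch of the sequence the test expects, assuming the cluster could be started first:

	out/minikube-darwin-arm64 start -p default-k8s-diff-port-321000
	out/minikube-darwin-arm64 pause -p default-k8s-diff-port-321000 --alsologtostderr -v=1
	out/minikube-darwin-arm64 unpause -p default-k8s-diff-port-321000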

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-949000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (30.082375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-949000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-949000 --alsologtostderr -v=1: exit status 83 (42.30175ms)

-- stdout --
	* The control-plane node newest-cni-949000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-949000"

-- /stdout --
** stderr ** 
	I0731 12:41:28.000036   10830 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:41:28.000197   10830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:28.000201   10830 out.go:304] Setting ErrFile to fd 2...
	I0731 12:41:28.000203   10830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:41:28.000357   10830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:41:28.000580   10830 out.go:298] Setting JSON to false
	I0731 12:41:28.000586   10830 mustload.go:65] Loading cluster: newest-cni-949000
	I0731 12:41:28.000814   10830 config.go:182] Loaded profile config "newest-cni-949000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:41:28.005005   10830 out.go:177] * The control-plane node newest-cni-949000 host is not running: state=Stopped
	I0731 12:41:28.008771   10830 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-949000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-949000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (29.299167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-949000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (30.153667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-949000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 10.05
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 16.34
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.45
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.45
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.66
64 TestFunctional/serial/CacheCmd/cache/add_local 1.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.2
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.78
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.12
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 0.95
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
267 TestNoKubernetes/serial/ProfileList 0.1
268 TestNoKubernetes/serial/Stop 1.83
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
284 TestStartStop/group/old-k8s-version/serial/Stop 3.71
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
295 TestStartStop/group/no-preload/serial/Stop 2.08
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
306 TestStartStop/group/embed-certs/serial/Stop 3.64
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.08
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
324 TestStartStop/group/newest-cni/serial/Stop 3.6
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-537000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-537000: exit status 85 (95.765292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |          |
	|         | -p download-only-537000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:14:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:14:43.236476    7070 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:43.236636    7070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:43.236640    7070 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:43.236642    7070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:43.236753    7070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	W0731 12:14:43.236840    7070 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19360-6578/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19360-6578/.minikube/config/config.json: no such file or directory
	I0731 12:14:43.238127    7070 out.go:298] Setting JSON to true
	I0731 12:14:43.254261    7070 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4452,"bootTime":1722448831,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:14:43.254336    7070 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:43.260052    7070 out.go:97] [download-only-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:43.260209    7070 notify.go:220] Checking for updates...
	W0731 12:14:43.260273    7070 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 12:14:43.263780    7070 out.go:169] MINIKUBE_LOCATION=19360
	I0731 12:14:43.267036    7070 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:14:43.271999    7070 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:43.274926    7070 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:43.278033    7070 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	W0731 12:14:43.282398    7070 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:14:43.282593    7070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:43.285971    7070 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:14:43.285992    7070 start.go:297] selected driver: qemu2
	I0731 12:14:43.285994    7070 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:14:43.286062    7070 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:14:43.289028    7070 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:14:43.294191    7070 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:14:43.294313    7070 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:14:43.294327    7070 cni.go:84] Creating CNI manager for ""
	I0731 12:14:43.294344    7070 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:14:43.294395    7070 start.go:340] cluster config:
	{Name:download-only-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:43.298096    7070 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:43.301998    7070 out.go:97] Downloading VM boot image ...
	I0731 12:14:43.302012    7070 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 12:14:54.013201    7070 out.go:97] Starting "download-only-537000" primary control-plane node in "download-only-537000" cluster
	I0731 12:14:54.013225    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:54.079023    7070 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:54.079032    7070 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:54.079889    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:54.085131    7070 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 12:14:54.085139    7070 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:54.167943    7070 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:15:01.338286    7070 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:01.338444    7070 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:02.033226    7070 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:15:02.033427    7070 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-537000/config.json ...
	I0731 12:15:02.033444    7070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-537000/config.json: {Name:mk119f50d348b283632d10c30f43558feb9f07f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:15:02.033659    7070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:15:02.034485    7070 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 12:15:02.414392    7070 out.go:169] 
	W0731 12:15:02.418570    7070 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80 0x104559a80] Decompressors:map[bz2:0x140007cb300 gz:0x140007cb308 tar:0x140007cb2b0 tar.bz2:0x140007cb2c0 tar.gz:0x140007cb2d0 tar.xz:0x140007cb2e0 tar.zst:0x140007cb2f0 tbz2:0x140007cb2c0 tgz:0x140007cb2d0 txz:0x140007cb2e0 tzst:0x140007cb2f0 xz:0x140007cb310 zip:0x140007cb320 zst:0x140007cb318] Getters:map[file:0x140017846d0 http:0x140000b4cd0 https:0x140000b4d20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 12:15:02.418591    7070 out_reason.go:110] 
	W0731 12:15:02.427314    7070 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:15:02.431461    7070 out.go:169] 
	
	
	* The control-plane node download-only-537000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-537000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
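
Note: the one genuine download failure buried in this passing log is the kubectl fetch for v1.20.0 on darwin/arm64: the getter's checksum request ends in HTTP 404, presumably because upstream never published darwin/arm64 binaries for a release that old. The same URL from the log can be probed by hand (a sketch; assumes curl is available on the agent):

	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # the final response should be a 404, matching the getter error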

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-537000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (10.05s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-207000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-207000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (10.047752208s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.05s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-207000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-207000: exit status 85 (79.385834ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-537000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-537000        | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -o=json --download-only        | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | -p download-only-207000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:15:02
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:15:02.843935    7094 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:15:02.844052    7094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:02.844056    7094 out.go:304] Setting ErrFile to fd 2...
	I0731 12:15:02.844065    7094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:02.844194    7094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:15:02.845232    7094 out.go:298] Setting JSON to true
	I0731 12:15:02.861436    7094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4471,"bootTime":1722448831,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:15:02.861504    7094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:15:02.865376    7094 out.go:97] [download-only-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:15:02.865477    7094 notify.go:220] Checking for updates...
	I0731 12:15:02.869471    7094 out.go:169] MINIKUBE_LOCATION=19360
	I0731 12:15:02.872499    7094 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:15:02.875486    7094 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:15:02.878483    7094 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:15:02.881484    7094 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	W0731 12:15:02.887458    7094 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:15:02.887663    7094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:15:02.890438    7094 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:15:02.890445    7094 start.go:297] selected driver: qemu2
	I0731 12:15:02.890448    7094 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:15:02.890491    7094 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:15:02.893315    7094 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:15:02.898669    7094 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:15:02.898764    7094 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:15:02.898808    7094 cni.go:84] Creating CNI manager for ""
	I0731 12:15:02.898815    7094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:15:02.898820    7094 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:15:02.898865    7094 start.go:340] cluster config:
	{Name:download-only-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:02.902357    7094 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:15:02.905447    7094 out.go:97] Starting "download-only-207000" primary control-plane node in "download-only-207000" cluster
	I0731 12:15:02.905453    7094 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:02.966391    7094 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:15:02.966414    7094 cache.go:56] Caching tarball of preloaded images
	I0731 12:15:02.966607    7094 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:02.970705    7094 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 12:15:02.970712    7094 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:03.045386    7094 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:15:10.762455    7094 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:10.762614    7094 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:11.304775    7094 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:15:11.304984    7094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-207000/config.json ...
	I0731 12:15:11.304999    7094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-207000/config.json: {Name:mk87ad85d59d433e4fe7c1d852ad0d461b10fe23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:15:11.305221    7094 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:11.305348    7094 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-207000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-207000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-207000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (16.34s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-014000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-014000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (16.336765208s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (16.34s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-014000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-014000: exit status 85 (77.220375ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-537000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-537000             | download-only-537000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -o=json --download-only             | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | -p download-only-207000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| delete  | -p download-only-207000             | download-only-207000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -o=json --download-only             | download-only-014000 | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | -p download-only-014000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:15:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:15:13.183770    7118 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:15:13.183896    7118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:13.183899    7118 out.go:304] Setting ErrFile to fd 2...
	I0731 12:15:13.183902    7118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:13.184029    7118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:15:13.185064    7118 out.go:298] Setting JSON to true
	I0731 12:15:13.203168    7118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4482,"bootTime":1722448831,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:15:13.203246    7118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:15:13.207948    7118 out.go:97] [download-only-014000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:15:13.208029    7118 notify.go:220] Checking for updates...
	I0731 12:15:13.212150    7118 out.go:169] MINIKUBE_LOCATION=19360
	I0731 12:15:13.216125    7118 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:15:13.220253    7118 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:15:13.223170    7118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:15:13.226192    7118 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	W0731 12:15:13.232136    7118 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:15:13.232288    7118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:15:13.235151    7118 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:15:13.235159    7118 start.go:297] selected driver: qemu2
	I0731 12:15:13.235163    7118 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:15:13.235207    7118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:15:13.245608    7118 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:15:13.251522    7118 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:15:13.251611    7118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:15:13.251646    7118 cni.go:84] Creating CNI manager for ""
	I0731 12:15:13.251654    7118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:15:13.251663    7118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:15:13.251709    7118 start.go:340] cluster config:
	{Name:download-only-014000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:13.255423    7118 iso.go:125] acquiring lock: {Name:mkee3b69eca7c34b057af3ec5b985c19350c9bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:15:13.258192    7118 out.go:97] Starting "download-only-014000" primary control-plane node in "download-only-014000" cluster
	I0731 12:15:13.258200    7118 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:15:13.318511    7118 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:15:13.318532    7118 cache.go:56] Caching tarball of preloaded images
	I0731 12:15:13.318715    7118 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:15:13.322921    7118 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 12:15:13.322929    7118 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:13.399284    7118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:15:21.814150    7118 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:21.814321    7118 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:15:22.333500    7118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:15:22.333701    7118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-014000/config.json ...
	I0731 12:15:22.333721    7118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-6578/.minikube/profiles/download-only-014000/config.json: {Name:mk634736c160bfac70bffaaf33418bbcacf4ef60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:15:22.333947    7118 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:15:22.334070    7118 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-6578/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-014000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-014000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-014000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-327000 --alsologtostderr --binary-mirror http://127.0.0.1:51044 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-327000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-327000
--- PASS: TestBinaryMirror (0.29s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-728000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-728000: exit status 85 (61.384584ms)

-- stdout --
	* Profile "addons-728000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-728000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-728000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-728000: exit status 85 (57.416542ms)

-- stdout --
	* Profile "addons-728000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-728000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.45s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.45s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status: exit status 7 (31.280917ms)

-- stdout --
	nospam-924000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status: exit status 7 (29.519917ms)

-- stdout --
	nospam-924000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status: exit status 7 (29.955458ms)

-- stdout --
	nospam-924000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause: exit status 83 (40.354291ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause: exit status 83 (38.931625ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause: exit status 83 (38.816459ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause: exit status 83 (38.752709ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause: exit status 83 (38.888875ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause: exit status 83 (39.6695ms)

-- stdout --
	* The control-plane node nospam-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-924000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop: (1.972283584s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop: (3.218810875s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-924000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-924000 stop: (3.255004833s)
--- PASS: TestErrorSpam/stop (8.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19360-6578/.minikube/files/etc/test/nested/copy/7068/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4181754627/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache add minikube-local-cache-test:functional-419000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 cache delete minikube-local-cache-test:functional-419000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-419000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 config get cpus: exit status 14 (30.784417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 config get cpus: exit status 14 (32.373042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-419000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (160.388167ms)

-- stdout --
	* [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 12:17:06.279733    7680 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:17:06.279900    7680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.279904    7680 out.go:304] Setting ErrFile to fd 2...
	I0731 12:17:06.279907    7680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.280087    7680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:17:06.281421    7680 out.go:298] Setting JSON to false
	I0731 12:17:06.301210    7680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4595,"bootTime":1722448831,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:17:06.301277    7680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:17:06.305611    7680 out.go:177] * [functional-419000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:17:06.313357    7680 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:17:06.313423    7680 notify.go:220] Checking for updates...
	I0731 12:17:06.320346    7680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:17:06.323358    7680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:17:06.326442    7680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:17:06.329271    7680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:17:06.332335    7680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:17:06.335741    7680 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:17:06.336044    7680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:17:06.340305    7680 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:17:06.347308    7680 start.go:297] selected driver: qemu2
	I0731 12:17:06.347314    7680 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:17:06.347366    7680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:17:06.354291    7680 out.go:177] 
	W0731 12:17:06.358358    7680 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 12:17:06.361262    7680 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-419000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-419000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.780291ms)

-- stdout --
	* [functional-419000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 12:17:06.504664    7691 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:17:06.504773    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.504777    7691 out.go:304] Setting ErrFile to fd 2...
	I0731 12:17:06.504780    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:17:06.504913    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-6578/.minikube/bin
	I0731 12:17:06.506354    7691 out.go:298] Setting JSON to false
	I0731 12:17:06.523277    7691 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4595,"bootTime":1722448831,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:17:06.523346    7691 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:17:06.528316    7691 out.go:177] * [functional-419000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0731 12:17:06.535355    7691 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 12:17:06.535408    7691 notify.go:220] Checking for updates...
	I0731 12:17:06.543334    7691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	I0731 12:17:06.547365    7691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:17:06.550323    7691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:17:06.553380    7691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	I0731 12:17:06.556385    7691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:17:06.559651    7691 config.go:182] Loaded profile config "functional-419000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:17:06.559913    7691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:17:06.564332    7691 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 12:17:06.571272    7691 start.go:297] selected driver: qemu2
	I0731 12:17:06.571281    7691 start.go:901] validating driver "qemu2" against &{Name:functional-419000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-419000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:17:06.571333    7691 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:17:06.577297    7691 out.go:177] 
	W0731 12:17:06.581203    7691 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 12:17:06.585296    7691 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.747606958s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-419000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image rm docker.io/kicbase/echo-server:functional-419000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-419000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 image save --daemon docker.io/kicbase/echo-server:functional-419000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-419000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "43.842ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.942375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.091584ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.586542ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01393s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-419000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-419000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-419000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-419000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-665000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-665000 --output=json --user=testUser: (3.123331917s)
--- PASS: TestJSONOutput/stop/Command (3.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-575000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-575000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.781167ms)

-- stdout --
	{"specversion":"1.0","id":"1c72d43f-8df2-47a1-8def-5356ed9ac2b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-575000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ae70425-2ba7-4fae-972b-adbe7512e966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"3bc30a92-fc9c-482e-a9e4-67cecf22969a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig"}}
	{"specversion":"1.0","id":"4ba08bd3-fe2e-4e1d-b4f0-a63ab9a09abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"beaec0e6-67d1-46e8-80d0-59cc1ec3a236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7962e32-f743-4064-8523-c6c0f68c8c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube"}}
	{"specversion":"1.0","id":"724b07d5-0e16-45b7-a083-8f171685c180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4bc8ed4e-23e2-412e-8ab4-f2bf5ba85fef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-575000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-575000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-443000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.855042ms)

-- stdout --
	* [NoKubernetes-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-6578/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-6578/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
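
Note: this subtest passes because the non-zero exit is the expected outcome; --no-kubernetes and --kubernetes-version are mutually exclusive, as the MK_USAGE error above states. A minimal sketch of the remediation the message itself suggests:

  $ out/minikube-darwin-arm64 config unset kubernetes-version
  $ out/minikube-darwin-arm64 start -p NoKubernetes-492000 --no-kubernetes --driver=qemu2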

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.439083ms)

-- stdout --
	* The control-plane node NoKubernetes-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-492000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
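
Note: "systemctl is-active --quiet" exits 0 only when the queried unit is active, so the test treats any non-zero exit as "kubelet is not running". In this run the non-zero status (83) comes from minikube itself because the guest host is stopped; on a running guest the same probe would surface systemctl's own result instead, e.g. (hypothetical output):

  $ out/minikube-darwin-arm64 ssh -p NoKubernetes-492000 "sudo systemctl is-active kubelet"
  inactive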

TestNoKubernetes/serial/ProfileList (0.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (1.83s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-492000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-492000: (1.829860416s)
--- PASS: TestNoKubernetes/serial/Stop (1.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.471209ms)

-- stdout --
	* The control-plane node NoKubernetes-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-492000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.71s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-739000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-739000 --alsologtostderr -v=3: (3.712229625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.71s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000 -n old-k8s-version-739000: exit status 7 (60.444ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-739000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
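
Note: the "(may be ok)" remark reflects how these tests script around exit codes: "minikube status" exits non-zero when the cluster is not fully running, and per the output above a stopped host yields exit status 7, which the test accepts before re-enabling the addon. A minimal sketch of the same check as the log's command, with the exit code made visible:

  $ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-739000; echo "exit=$?"
  Stopped
  exit=7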

TestStartStop/group/no-preload/serial/Stop (2.08s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-421000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-421000 --alsologtostderr -v=3: (2.082883042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (56.91475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-421000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.64s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-132000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-132000 --alsologtostderr -v=3: (3.642716708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.64s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-132000 -n embed-certs-132000: exit status 7 (55.77575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-132000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-321000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-321000 --alsologtostderr -v=3: (2.075339917s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-321000 -n default-k8s-diff-port-321000: exit status 7 (58.415125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-321000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-949000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.6s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-949000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-949000 --alsologtostderr -v=3: (3.595093833s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.60s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-949000 -n newest-cni-949000: exit status 7 (57.033417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-949000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.22s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2803417913/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722453391165674000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2803417913/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722453391165674000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2803417913/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722453391165674000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2803417913/001/test-1722453391165674000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (51.445042ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.602834ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.411917ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.969791ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.777625ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.385292ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.002917ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo umount -f /mount-9p": exit status 83 (46.568375ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2803417913/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.22s)

TestFunctional/parallel/MountCmd/specific-port (11.24s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port24648578/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.418458ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.115ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.523208ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.515208ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.5075ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.493917ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.713458ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "sudo umount -f /mount-9p": exit status 83 (46.726834ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-419000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port24648578/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.24s)
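
Note: all three MountCmd subtests skip for the same reason recorded above: macOS requires an interactive prompt before a non-code-signed binary may listen on a non-localhost port, so the 9p server never becomes reachable under CI. A minimal sketch of the manual equivalent on a host where that prompt has been accepted; the commands mirror the test, and /tmp/mount-demo is a hypothetical host path:

  $ out/minikube-darwin-arm64 mount -p functional-419000 /tmp/mount-demo:/mount-9p --port 46464 &
  $ out/minikube-darwin-arm64 ssh -p functional-419000 "findmnt -T /mount-9p | grep 9p"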

TestFunctional/parallel/MountCmd/VerifyCleanup (12.59s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (82.498792ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (83.091083ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (85.474792ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (84.103666ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (88.312292ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (87.62675ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-419000 ssh "findmnt -T" /mount1: exit status 83 (85.474541ms)

-- stdout --
	* The control-plane node functional-419000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-419000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-419000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361974752/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.59s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-782000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-782000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-782000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/hosts:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/resolv.conf:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-782000

>>> host: crictl pods:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: crictl containers:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> k8s: describe netcat deployment:
error: context "cilium-782000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-782000" does not exist

>>> k8s: netcat logs:
error: context "cilium-782000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-782000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-782000" does not exist

>>> k8s: coredns logs:
error: context "cilium-782000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-782000" does not exist

>>> k8s: api server logs:
error: context "cilium-782000" does not exist

>>> host: /etc/cni:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: ip a s:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: ip r s:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: iptables-save:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: iptables table nat:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-782000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-782000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-782000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-782000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-782000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-782000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-782000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-782000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-782000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-782000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-782000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: kubelet daemon config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> k8s: kubelet logs:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-782000

>>> host: docker daemon status:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: docker daemon config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: docker system info:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: cri-docker daemon status:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: cri-docker daemon config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: cri-dockerd version:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: containerd daemon status:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: containerd daemon config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: containerd config dump:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: crio daemon status:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: crio daemon config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: /etc/crio:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

>>> host: crio config:
* Profile "cilium-782000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782000"

----------------------- debugLogs end: cilium-782000 [took: 2.201916375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-782000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-782000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-362000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)